velocity-python


Name: velocity-python
Version: 0.0.95
Summary: A rapid application development library for interfacing with data storage
Upload time: 2025-08-09 04:24:24
Requires Python: >=3.7
License: MIT
Keywords: database, orm, sql, rapid-development, data-storage
# Velocity.DB

A modern Python database abstraction library that simplifies database operations across multiple database engines. Velocity.DB provides a unified interface for PostgreSQL, MySQL, SQLite, and SQL Server, with features like transaction management, automatic connection pooling, and database-agnostic query building.

## Core Design Philosophy

Velocity.DB is built around two fundamental concepts that make database programming intuitive and safe:

### 1. One Transaction Per Function Block

Every database operation must be wrapped in a single transaction using the `@engine.transaction` decorator. This ensures:

- **Atomicity**: All operations in a function either succeed together or fail together
- **Consistency**: Database state remains valid even if errors occur
- **Isolation**: Concurrent operations don't interfere with each other
- **Automatic cleanup**: Transactions commit on success, rollback on any exception

```python
@engine.transaction  # This entire function is one atomic operation
def transfer_money(tx, from_account_id, to_account_id, amount):
    # If ANY operation fails, ALL changes are automatically rolled back
    from_account = tx.table('accounts').find(from_account_id)
    to_account = tx.table('accounts').find(to_account_id)
    
    from_account['balance'] -= amount  # This change...
    to_account['balance'] += amount    # ...and this change happen together or not at all
    
    # No need to manually commit - happens automatically when function completes
```

### 2. Rows as Python Dictionaries

Database rows behave exactly like Python dictionaries, using familiar syntax:

```python
@engine.transaction
def work_with_user(tx):
    user = tx.table('users').find(123)
    
    # Read like a dictionary
    name = user['name']
    email = user['email']
    
    # Update like a dictionary  
    user['name'] = 'New Name'
    user['status'] = 'active'
    
    # Check existence like a dictionary
    if 'phone' in user:
        phone = user['phone']
    
    # Get all data like a dictionary
    user_data = dict(user)  # or user.to_dict()
```

This design eliminates the need to learn ORM-specific syntax while maintaining the power and flexibility of direct database access.

## Features

- **Multi-database support**: PostgreSQL, MySQL, SQLite, SQL Server
- **Transaction management**: Decorator-based transaction handling with automatic rollback
- **Query builder**: Database-agnostic SQL generation with foreign key expansion
- **Connection pooling**: Automatic connection management and pooling
- **Type safety**: Comprehensive type hints and validation
- **Modern Python**: Built for Python 3.8+ with modern packaging

## Supported Databases

- **PostgreSQL** (via psycopg2)
- **MySQL** (via mysqlclient)
- **SQLite** (built-in sqlite3)
- **SQL Server** (via pytds)

## Installation

Install the base package:

```bash
pip install velocity-python
```

Install with database-specific dependencies:

```bash
# For PostgreSQL
pip install velocity-python[postgres]

# For MySQL  
pip install velocity-python[mysql]

# For SQL Server
pip install velocity-python[sqlserver]

# For all databases
pip install velocity-python[all]
```

## Quick Start

### Database Connection

```python
import velocity.db

# PostgreSQL
engine = velocity.db.postgres(
    host="localhost",
    port=5432,
    database="mydb",
    user="username",
    password="password"
)

# MySQL
engine = velocity.db.mysql(
    host="localhost",
    port=3306,
    database="mydb",
    user="username", 
    password="password"
)

# SQLite
engine = velocity.db.sqlite("path/to/database.db")

# SQL Server
engine = velocity.db.sqlserver(
    host="localhost",
    port=1433,
    database="mydb",
    user="username",
    password="password"
)
```

### Transaction Management

Velocity.DB enforces a "one transaction per function" pattern using the `@engine.transaction` decorator. The decorator intelligently handles transaction injection:

#### How Transaction Injection Works

The `@engine.transaction` decorator automatically provides a transaction object, but **you must declare `tx` as a parameter** in your function signature:

```python
@engine.transaction
def create_user_with_profile(tx):  # ← You MUST declare 'tx' parameter
    # The engine automatically creates and injects a Transaction object here
    # 'tx' is provided by the decorator, not by the caller
    
    user = tx.table('users').new()
    user['name'] = 'John Doe'
    user['email'] = 'john@example.com'
    
    profile = tx.table('profiles').new()
    profile['user_id'] = user['sys_id']
    profile['bio'] = 'Software developer'
    
    return user['sys_id']

# When you call the function, you DON'T pass the tx argument:
user_id = create_user_with_profile()  # ← No 'tx' argument needed
```

#### The Magic Behind the Scenes

The decorator uses Python's `inspect` module to:

1. **Check the function signature** - Looks for a parameter named `tx`
2. **Automatic injection** - If `tx` is declared but not provided by caller, creates a new Transaction
3. **Parameter positioning** - Inserts the transaction object at the correct position in the argument list
4. **Transaction lifecycle** - Automatically commits on success or rolls back on exceptions

```python
@engine.transaction
def update_user_settings(tx, user_id, settings):  # ← 'tx' must be declared
    # Engine finds 'tx' in position 0, creates Transaction, and injects it
    user = tx.table('users').find(user_id)
    user['settings'] = settings
    user['last_updated'] = datetime.now()

# Call without providing 'tx' - the decorator handles it:
update_user_settings(123, {'theme': 'dark'})  # ← Only pass your parameters
```
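
For intuition, here is a minimal sketch of what such an injection decorator could look like. It is illustrative only, not Velocity.DB's actual implementation: the `Transaction` stub and the reuse check are assumptions, and the real decorator also handles classes, keyword arguments, and the reserved `_tx` name.

```python
import functools
import inspect

class Transaction:
    """Hypothetical stand-in for a real transaction object (sketch only)."""
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc, tb):
        # A real implementation would commit on success and roll back on error.
        return False  # returning False re-raises any exception

def transaction(func):
    sig = inspect.signature(func)
    tx_pos = list(sig.parameters).index('tx')  # where 'tx' sits in the signature

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Reuse an existing transaction if the caller passed one in the 'tx' position.
        if len(args) > tx_pos and isinstance(args[tx_pos], Transaction):
            return func(*args, **kwargs)
        # Otherwise create one, splice it into the argument list, and manage its lifecycle.
        with Transaction() as tx:
            new_args = args[:tx_pos] + (tx,) + args[tx_pos:]
            return func(*new_args, **kwargs)
    return wrapper
```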

#### Advanced: Transaction Reuse

If you want multiple function calls to be part of the same transaction, **explicitly pass the `tx` object** to chain operations together:

```python
@engine.transaction
def create_user(tx, name, email):
    user = tx.table('users').new()
    user['name'] = name
    user['email'] = email
    return user['sys_id']

@engine.transaction
def create_profile(tx, user_id, bio):
    profile = tx.table('profiles').new()
    profile['user_id'] = user_id
    profile['bio'] = bio
    return profile['sys_id']

@engine.transaction
def create_user_with_profile(tx, name, email, bio):
    # All operations in this function use the SAME transaction
    
    # Pass 'tx' to keep this call in the same transaction
    user_id = create_user(tx, name, email)  # ← Pass 'tx' explicitly
    
    # Pass 'tx' to keep this call in the same transaction too
    profile_id = create_profile(tx, user_id, bio)  # ← Pass 'tx' explicitly
    
    # If ANY operation fails, ALL changes are rolled back together
    return user_id

# When you call the main function, don't pass tx - let the decorator provide it:
user_id = create_user_with_profile('John', 'john@example.com', 'Developer')
```

#### Two Different Transaction Behaviors

```python
# Scenario 1: SAME transaction (pass tx through)
@engine.transaction
def atomic_operation(tx):
    user_id = create_user(tx, 'John', 'john@example.com')  # ← Part of same transaction
    create_profile(tx, user_id, 'Developer')               # ← Part of same transaction
    # If profile creation fails, user creation is also rolled back

# Scenario 2: SEPARATE transactions (don't pass tx)
@engine.transaction  
def separate_operations(tx):
    user_id = create_user('John', 'john@example.com')  # ← Creates its own transaction
    create_profile(user_id, 'Developer')                # ← Creates its own transaction
    # If profile creation fails, user creation is NOT rolled back
```

**Key Rule**: To include function calls in the same transaction, **always pass the `tx` parameter explicitly**. If you don't pass `tx`, each decorated function creates its own separate transaction.

#### Class-Level Transaction Decoration

You can also apply `@engine.transaction` to an entire class, which automatically wraps **all methods** that have `tx` in their signature:

```python
@engine.transaction
class UserService:
    """All methods with 'tx' parameter get automatic transaction injection"""
    
    def create_user(self, tx, name, email):
        # This method gets automatic transaction injection
        user = tx.table('users').new()
        user['name'] = name
        user['email'] = email
        return user['sys_id']
    
    def update_user(self, tx, user_id, **kwargs):
        # This method also gets automatic transaction injection
        user = tx.table('users').find(user_id)
        for key, value in kwargs.items():
            user[key] = value
        return user.to_dict()
    
    def get_user_count(self):
        # This method is NOT wrapped (no 'tx' parameter)
        return "This method runs normally without transaction injection"
    
    def some_utility_method(self, data):
        # This method is NOT wrapped (no 'tx' parameter)
        return data.upper()

# Usage - each method call gets its own transaction automatically:
service = UserService()

# Each call creates its own transaction:
user_id = service.create_user('John', 'john@example.com')  # ← Own transaction
user_data = service.update_user(user_id, status='active')  # ← Own transaction

# Methods without 'tx' work normally:
count = service.get_user_count()  # ← No transaction injection
```

#### Combining Class and Method Transactions

```python
@engine.transaction
class UserService:
    
    def create_user(self, tx, name, email):
        user = tx.table('users').new()
        user['name'] = name
        user['email'] = email
        return user['sys_id']
    
    def create_profile(self, tx, user_id, bio):
        profile = tx.table('profiles').new()
        profile['user_id'] = user_id
        profile['bio'] = bio
        return profile['sys_id']
    
    def create_user_with_profile(self, tx, name, email, bio):
        # Share transaction across method calls within the same class
        user_id = self.create_user(tx, name, email)      # ← Pass tx to share transaction
        profile_id = self.create_profile(tx, user_id, bio)  # ← Pass tx to share transaction
        return user_id

# Usage:
service = UserService()
# This creates ONE transaction for all operations:
user_id = service.create_user_with_profile('John', 'john@example.com', 'Developer')
```

**Key Benefits:**
- **Automatic transaction management**: No need to call `begin()`, `commit()`, or `rollback()`
- **Intelligent injection**: Engine inspects your function and provides `tx` automatically
- **Parameter flexibility**: `tx` can be in any position in your function signature
- **Transaction reuse**: Pass existing transactions to chain operations together
- **Clear boundaries**: Each function represents a complete business operation
- **Testable**: Easy to test since each function is a complete unit of work

**Important Rules:**
- **Must declare `tx` parameter**: The function signature must include `tx` as a parameter
- **Don't pass `tx` when calling from outside**: Let the decorator provide it automatically for new transactions
- **DO pass `tx` for same transaction**: To include function calls in the same transaction, explicitly pass the `tx` parameter
- **Class decoration**: `@engine.transaction` on a class wraps all methods that have `tx` in their signature
- **Selective wrapping**: Methods without `tx` parameter are not affected by class-level decoration
- **No `_tx` parameter**: Using `_tx` as a parameter name is forbidden (reserved)
- **Position matters**: The decorator injects `tx` at the exact position declared in your signature
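
As a brief hedged illustration of the last two rules (relying only on the positioning behavior described above; the `tag_user` function and its columns are hypothetical):

```python
@engine.transaction
def tag_user(user_id, tx, tag):  # 'tx' declared in the middle of the signature
    user = tx.table('users').find(user_id)
    user['tag'] = tag

# Call without 'tx'; the decorator injects the transaction at position 1:
tag_user(123, 'beta-tester')
```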

### Table Operations

#### Creating Tables

```python
@engine.transaction
def create_tables(tx):
    # Create a users table
    users = tx.table('users')
    users.create()
    
    # Add columns by treating the row like a dictionary
    user = users.new()  # Creates a new row object
    user['name'] = 'Sample User'        # Sets column values using dict syntax
    user['email'] = 'user@example.com'  # No need for setters/getters
    user['created_at'] = datetime.now() # Python types automatically handled
    
    # The row is automatically saved when the transaction completes
```

#### Selecting Data

```python
@engine.transaction
def query_users(tx):
    users = tx.table('users')
    
    # Select all users - returns list of dict-like row objects
    all_users = users.select().all()
    for user in all_users:
        print(f"User: {user['name']} ({user['email']})")  # Dict syntax
    
    # Select with conditions
    active_users = users.select(where={'status': 'active'}).all()
    
    # Select specific columns
    names = users.select(columns=['name', 'email']).all()
    
    # Select with ordering and limits
    recent = users.select(
        orderby='created_at DESC',
        qty=10
    ).all()
    
    # Find single record - returns dict-like row object
    user = users.find({'email': 'john@example.com'})
    if user:
        # Access like dictionary
        user_name = user['name']
        user_id = user['sys_id']
        
        # Check existence like dictionary
        has_phone = 'phone' in user
        
        # Convert to regular dict if needed
        user_dict = user.to_dict()
    
    # Get by primary key
    user = users.find(123)  # Returns dict-like row object or None
```

#### Updating Data

```python
@engine.transaction
def update_user(tx):
    users = tx.table('users')
    
    # Find and update using dictionary syntax
    user = users.find(123)  # Returns a row that behaves like a dict
    user['name'] = 'Updated Name'         # Direct assignment like a dict
    user['updated_at'] = datetime.now()   # No special methods needed
    
    # Check if columns exist before updating
    if 'phone' in user:
        user['phone'] = '+1-555-0123'
    
    # Get current values like a dictionary
    current_status = user.get('status', 'unknown')
    
    # Bulk update using where conditions
    users.update(
        {'status': 'inactive'},           # What to update (dict format)
        where={'<last_login': '2023-01-01'}  # Condition using operator prefix
    )
```

#### Inserting Data

```python
@engine.transaction
def create_users(tx):
    users = tx.table('users')
    
    # Method 1: Create new row and populate like a dictionary
    user = users.new()  # Creates empty row object
    user['name'] = 'New User'           # Assign values using dict syntax
    user['email'] = 'new@example.com'
    # Row automatically saved when transaction completes
    
    # Method 2: Insert with dictionary data directly
    user_id = users.insert({
        'name': 'Another User',
        'email': 'another@example.com'
    })
    
    # Method 3: Upsert (insert or update) using dictionary syntax
    users.upsert(
        {'name': 'John Doe', 'status': 'active'},  # Data to insert/update
        {'email': 'john@example.com'}              # Matching condition
    )
```

#### Deleting Data

```python
@engine.transaction
def delete_users(tx):
    users = tx.table('users')
    
    # Delete single record
    user = users.find(123)
    user.delete()
    
    # Delete with conditions
    users.delete(where={'status': 'inactive'})
    
    # Truncate table
    users.truncate()
    
    # Drop table
    users.drop()
```

### Advanced Queries

#### Foreign Key Navigation

Velocity.DB supports automatic foreign key expansion using pointer syntax:

```python
@engine.transaction  
def get_user_with_profile(tx):
    users = tx.table('users')
    
    # Automatic join via foreign key
    users_with_profiles = users.select(
        columns=['name', 'email', 'profile_id>bio', 'profile_id>avatar_url'],
        where={'status': 'active'}
    ).all()
```

#### Complex Conditions

Velocity.DB supports various where clause formats:

```python
@engine.transaction
def complex_queries(tx):
    users = tx.table('users')
    
    # Dictionary format with operator prefixes
    results = users.select(where={
        'status': 'active',          # Equals (default)
        '>=created_at': '2023-01-01',  # Greater than or equal
        '><age': [18, 65],           # Between
        '%email': '@company.com',    # Like
        '!status': 'deleted'         # Not equal
    }).all()
    
    # List of tuples format for complex predicates
    results = users.select(where=[
        ('status = %s', 'active'),
        ('priority = %s OR urgency = %s', ('high', 'critical'))
    ]).all()
    
    # Raw string format
    results = users.select(where="status = 'active' AND age >= 18").all()
```

**Available Operators:**

| Operator | SQL Equivalent | Example Usage | Description |
|----------|----------------|---------------|-------------|
| `=` (default) | `=` | `{'name': 'John'}` | Equals (default when no operator specified) |
| `>` | `>` | `{'>age': 18}` | Greater than |
| `<` | `<` | `{'<score': 100}` | Less than |
| `>=` | `>=` | `{'>=created_at': '2023-01-01'}` | Greater than or equal |
| `<=` | `<=` | `{'<=updated_at': '2023-12-31'}` | Less than or equal |
| `!` | `<>` | `{'!status': 'deleted'}` | Not equal |
| `!=` | `<>` | `{'!=status': 'deleted'}` | Not equal (alternative) |
| `<>` | `<>` | `{'<>status': 'deleted'}` | Not equal (SQL style) |
| `%` | `LIKE` | `{'%email': '@company.com'}` | Like pattern matching |
| `!%` | `NOT LIKE` | `{'!%name': 'test%'}` | Not like pattern matching |
| `><` | `BETWEEN` | `{'><age': [18, 65]}` | Between two values (inclusive) |
| `!><` | `NOT BETWEEN` | `{'!><score': [0, 50]}` | Not between two values |

#### Aggregations and Grouping

```python
@engine.transaction
def analytics(tx):
    orders = tx.table('orders')
    
    # Count records
    total_orders = orders.count()
    recent_orders = orders.count(where={'>=created_at': '2023-01-01'})
    
    # Aggregations
    stats = orders.select(
        columns=['COUNT(*) as total', 'SUM(amount) as revenue', 'AVG(amount) as avg_order'],
        where={'status': 'completed'},
        groupby='customer_id'
    ).all()
```

### Raw SQL

When you need full control, execute raw SQL. The `tx.execute()` method returns a **Result object** that provides flexible data transformation:

```python
@engine.transaction
def raw_queries(tx):
    # Execute raw SQL - returns a Result object
    result = tx.execute("""
        SELECT u.name, u.email, COUNT(o.id) as order_count
        FROM users u
        LEFT JOIN orders o ON u.id = o.user_id
        WHERE u.status = %s
        GROUP BY u.id, u.name, u.email
        HAVING COUNT(o.id) > %s
    """, ['active', 5])
    
    # Multiple ways to work with the Result object:
    
    # Get all rows as list of dictionaries (default)
    rows = result.all()
    for row in rows:
        print(f"User: {row['name']} ({row['email']}) - {row['order_count']} orders")
    
    # Or iterate one row at a time
    for row in result:
        print(f"User: {row['name']}")
    
    # Transform data format
    result.as_tuple().all()        # List of tuples
    result.as_list().all()         # List of lists  
    result.as_json().all()         # List of JSON strings
    result.as_named_tuple().all()  # List of (name, value) pairs
    
    # Get single values
    total = tx.execute("SELECT COUNT(*) FROM users").scalar()
    
    # Get simple list of single column values
    names = tx.execute("SELECT name FROM users").as_simple_list().all()
    
    # Get just the first row
    first_user = tx.execute("SELECT * FROM users LIMIT 1").one()
```

#### Result Object Methods

The **Result object** returned by `tx.execute()` provides powerful data transformation capabilities:

| Method | Description | Returns |
|--------|-------------|---------|
| `.all()` | Get all rows at once | `List[Dict]` (default) or transformed format |
| `.one(default=None)` | Get first row only | `Dict` or `default` if no rows |
| `.scalar(default=None)` | Get first column of first row | Single value or `default` |
| `.batch(qty=1)` | Iterate in batches | Generator yielding lists of rows |
| **Data Format Transformations** | | |
| `.as_dict()` | Rows as dictionaries (default) | `{'column': value, ...}` |
| `.as_tuple()` | Rows as tuples | `(value1, value2, ...)` |
| `.as_list()` | Rows as lists | `[value1, value2, ...]` |
| `.as_json()` | Rows as JSON strings | `'{"column": "value", ...}'` |
| `.as_named_tuple()` | Rows as name-value pairs | `[('column', value), ...]` |
| `.as_simple_list(pos=0)` | Extract single column | `value` (from position pos) |
| **Utility Methods** | | |
| `.headers` | Get column names | `['col1', 'col2', ...]` |
| `.close()` | Close the cursor | `None` |
| `.enum()` | Add row numbers | `(index, row)` tuples |

```python
@engine.transaction  
def result_examples(tx):
    # Different output formats for the same query
    result = tx.execute("SELECT name, email FROM users LIMIT 3")
    
    # As dictionaries (default)
    dicts = result.as_dict().all()
    # [{'name': 'John', 'email': 'john@example.com'}, ...]
    
    # As tuples  
    tuples = result.as_tuple().all()
    # [('John', 'john@example.com'), ...]
    
    # As JSON strings
    json_rows = result.as_json().all()  
    # ['{"name": "John", "email": "john@example.com"}', ...]
    
    # Just email addresses
    emails = result.as_simple_list(1).all()  # Position 1 = email column
    # ['john@example.com', 'jane@example.com', ...]
    
    # With row numbers
    numbered = result.enum().all()
    # [(0, {'name': 'John', 'email': 'john@example.com'}), ...]
```
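
The `.batch()` method from the table above does not appear in the examples; here is a short hedged sketch of how it might be used, assuming the generator-of-row-lists behavior listed in the table:

```python
@engine.transaction
def export_emails(tx):
    result = tx.execute("SELECT name, email FROM users")

    # Process rows in chunks of 500 rather than materializing them all at once
    for chunk in result.batch(qty=500):
        for row in chunk:
            print(f"{row['name']} <{row['email']}>")
```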

## Automatic Schema Evolution

One of Velocity.DB's most powerful features is **automatic table and column creation**. The library uses decorators to catch database schema errors and automatically evolve your schema as your code changes.

### How Automatic Creation Works

Velocity.DB uses the `@create_missing` decorator on key table operations. When you try to:

- **Insert data** with new columns
- **Update rows** with new columns  
- **Query tables** that don't exist
- **Reference columns** that don't exist

The library automatically:

1. **Catches the database error** (table missing, column missing)
2. **Analyzes the data** you're trying to work with
3. **Creates the missing table/columns** with appropriate types
4. **Retries the original operation** seamlessly

```python
@engine.transaction
def create_user_profile(tx):
    # This table and columns don't exist yet - that's OK!
    users = tx.table('users')  # Table will be created automatically
    
    # Insert data with new columns - they'll be created automatically
    user = users.new()
    user['name'] = 'John Doe'           # VARCHAR column created automatically
    user['age'] = 28                    # INTEGER column created automatically  
    user['salary'] = 75000.50           # NUMERIC column created automatically
    user['is_active'] = True            # BOOLEAN column created automatically
    user['bio'] = 'Software engineer'   # TEXT column created automatically
    
    # The table and all columns are now created and data is inserted
    return user['sys_id']

# Call this function - table and columns created seamlessly
user_id = create_user_profile()
```

### Type Inference

Velocity.DB automatically infers SQL types from Python values:

| Python Type | SQL Type (PostgreSQL) | SQL Type (MySQL) | SQL Type (SQLite) |
|-------------|------------------------|-------------------|-------------------|
| `str` | `TEXT` | `TEXT` | `TEXT` |
| `int` | `BIGINT` | `BIGINT` | `INTEGER` |
| `float` | `NUMERIC(19,6)` | `DECIMAL(19,6)` | `REAL` |
| `bool` | `BOOLEAN` | `BOOLEAN` | `INTEGER` |
| `datetime` | `TIMESTAMP` | `DATETIME` | `TEXT` |
| `date` | `DATE` | `DATE` | `TEXT` |
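
For intuition, the PostgreSQL column of the table above could be expressed as a simple type lookup. This is an illustrative sketch only, not Velocity.DB's internal code:

```python
from datetime import date, datetime

# Illustrative only: a PostgreSQL-flavored mapping matching the table above.
# Order matters: bool is a subclass of int, and datetime is a subclass of date.
PG_TYPE_MAP = [
    (bool, "BOOLEAN"),
    (int, "BIGINT"),
    (float, "NUMERIC(19,6)"),
    (datetime, "TIMESTAMP"),
    (date, "DATE"),
    (str, "TEXT"),
]

def infer_pg_type(value):
    """Return a plausible PostgreSQL column type for a Python value (sketch)."""
    for py_type, sql_type in PG_TYPE_MAP:
        if isinstance(value, py_type):
            return sql_type
    return "TEXT"  # unknown types fall back to TEXT

print(infer_pg_type(75000.50))  # NUMERIC(19,6)
print(infer_pg_type(True))      # BOOLEAN
```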

### Progressive Schema Evolution

Your schema evolves naturally as your application grows:

```python
# Week 1: Start simple
@engine.transaction
def create_basic_user(tx):
    users = tx.table('users')
    user = users.new()
    user['name'] = 'Alice'
    user['email'] = 'alice@example.com'
    return user['sys_id']

# Week 2: Add more fields
@engine.transaction  
def create_detailed_user(tx):
    users = tx.table('users')
    user = users.new()
    user['name'] = 'Bob'
    user['email'] = 'bob@example.com'
    user['phone'] = '+1-555-0123'      # New column added automatically
    user['department'] = 'Engineering'  # Another new column added automatically
    user['start_date'] = date.today()   # Date column added automatically
    return user['sys_id']

# Week 3: Even more fields
@engine.transaction
def create_full_user(tx):
    users = tx.table('users')  
    user = users.new()
    user['name'] = 'Carol'
    user['email'] = 'carol@example.com'
    user['phone'] = '+1-555-0124'
    user['department'] = 'Marketing'
    user['start_date'] = date.today()
    user['salary'] = 85000.00          # Salary column added automatically
    user['is_manager'] = True          # Boolean column added automatically
    user['notes'] = 'Excellent performer'  # Notes column added automatically
    return user['sys_id']
```

### Behind the Scenes

The `@create_missing` decorator works by:

```python
# This is what happens automatically:
def create_missing(func):
    def wrapper(self, *args, **kwds):
        try:
            # Try the original operation
            return func(self, *args, **kwds)
        except DbTableMissingError:
            # Table doesn't exist - create it from the data
            data = extract_data_from_args(args, kwds)
            self.create(data)  # Create table with inferred columns
            return func(self, *args, **kwds)  # Retry operation
        except DbColumnMissingError:
            # Column doesn't exist - add it to the table
            data = extract_data_from_args(args, kwds)
            self.alter(data)  # Add missing columns
            return func(self, *args, **kwds)  # Retry operation
    return wrapper
```

### Which Operations Are Protected

These table operations automatically create missing schema elements:

- `table.insert(data)` - Creates table and columns
- `table.update(data, where)` - Creates missing columns in data
- `table.merge(data, pk)` - Creates table and columns (upsert)
- `table.alter_type(column, type)` - Creates column if missing
- `table.alter(columns)` - Adds missing columns
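
As a quick hedged illustration of the first item above (the `events` table and its columns are hypothetical and do not need to exist beforehand):

```python
from datetime import datetime

@engine.transaction
def log_event(tx, name, payload):
    # insert() catches the missing-table/missing-column errors, creates the
    # 'events' table with inferred column types, then retries the insert.
    tx.table('events').insert({
        'name': name,
        'payload': payload,
        'created_at': datetime.now(),
    })
```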

### Manual Schema Control

If you prefer explicit control, you can disable automatic creation:

```python
@engine.transaction
def explicit_schema_control(tx):
    users = tx.table('users')
    
    # Check if table exists before using it
    if not users.exists():
        users.create({
            'name': str,
            'email': str,
            'age': int,
            'is_active': bool
        })
    
    # Check if column exists before using it
    if 'phone' not in users.column_names():
        users.alter({'phone': str})
    
    # Now safely use the table
    user = users.new()
    user['name'] = 'David'
    user['email'] = 'david@example.com'
    user['phone'] = '+1-555-0125'
```

### Development Benefits

**For Development:**
- **Rapid prototyping**: Focus on business logic, not database setup
- **Zero configuration**: No migration scripts or schema files needed
- **Natural evolution**: Schema grows with your application

**For Production:**
- **Controlled deployment**: Use `sql_only=True` to generate schema changes for review
- **Safe migrations**: Test automatic changes in staging environments
- **Backwards compatibility**: New columns are added, existing data preserved

```python
# Generate SQL for review without executing
@engine.transaction
def preview_schema_changes(tx):
    users = tx.table('users')
    
    # See what SQL would be generated
    sql, vals = users.insert({
        'name': 'Test User',
        'new_field': 'New Value'
    }, sql_only=True)
    
    print("SQL that would be executed:")
    print(sql)
    # Shows: ALTER TABLE users ADD COLUMN new_field TEXT; INSERT INTO users...
```

**Key Benefits:**
- **Zero-friction development**: Write code, not schema migrations
- **Type-safe evolution**: Python types automatically map to appropriate SQL types
- **Production-ready**: Generate reviewable SQL for controlled deployments
- **Database-agnostic**: Works consistently across PostgreSQL, MySQL, SQLite, and SQL Server

## Error Handling

The "one transaction per function" design automatically handles rollbacks on exceptions:

```python
@engine.transaction
def safe_transfer(tx, from_id, to_id, amount):
    try:
        # Multiple operations that must succeed together
        from_account = tx.table('accounts').find(from_id)
        to_account = tx.table('accounts').find(to_id)
        
        # Work with rows like dictionaries
        if from_account['balance'] < amount:
            raise ValueError("Insufficient funds")
            
        from_account['balance'] -= amount  # This change...
        to_account['balance'] += amount    # ...and this change are atomic
        
        # If any operation fails, entire transaction rolls back automatically
        
    except Exception as e:
        # Transaction automatically rolled back - no manual intervention needed
        logger.error(f"Transfer failed: {e}")
        raise  # Re-raise to let caller handle the business logic

@engine.transaction
def create_user_with_validation(tx, user_data):
    # Each function is a complete business operation
    users = tx.table('users')
    
    # Check if user already exists
    existing = users.find({'email': user_data['email']})
    if existing:
        raise ValueError("User already exists")
    
    # Create new user using dictionary interface
    user = users.new()
    user['name'] = user_data['name']
    user['email'] = user_data['email']
    user['created_at'] = datetime.now()
    
    # If we reach here, everything commits automatically
    return user['sys_id']
```

**Key Benefits of Transaction-Per-Function:**
- **Automatic rollback**: Any exception undoes all changes in that function
- **Clear error boundaries**: Each function represents one business operation
- **No resource leaks**: Connections and transactions are always properly cleaned up
- **Predictable behavior**: Functions either complete fully or have no effect

## Development

### Setting up for Development

This is currently a private repository. If you have access to the repository:

```bash
git clone <repository-url>
cd velocity-python
pip install -e .[dev]
```

### Running Tests

```bash
pytest tests/
```

### Code Quality

```bash
# Format code
black src/

# Type checking  
mypy src/

# Linting
flake8 src/
```

## License

This project is licensed under the MIT License - see the LICENSE file for details.

## Contributing

This is currently a private repository and we are not accepting public contributions at this time. However, this may change in the future based on community interest and project needs.

If you are interested in contributing to Velocity.DB, please reach out to discuss potential collaboration opportunities.

## Changelog

See [CHANGELOG.md](CHANGELOG.md) for a list of changes and version history.

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "velocity-python",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.7",
    "maintainer_email": null,
    "keywords": "database, orm, sql, rapid-development, data-storage",
    "author": null,
    "author_email": "Velocity Team <contact@example.com>",
    "download_url": "https://files.pythonhosted.org/packages/c9/17/8db5435fb43cc9bba947dcc83b31042ed0291b77e72586c032cc063626d0/velocity_python-0.0.95.tar.gz",
    "platform": null,
    "description": "# Velocity.DB\n\nA modern Python database abstraction library that simplifies database operations across multiple database engines. Velocity.DB provides a unified interface for PostgreSQL, MySQL, SQLite, and SQL Server, with features like transaction management, automatic connection pooling, and database-agnostic query building.\n\n## Core Design Philosophy\n\nVelocity.DB is built around two fundamental concepts that make database programming intuitive and safe:\n\n### 1. One Transaction Per Function Block\n\nEvery database operation must be wrapped in a single transaction using the `@engine.transaction` decorator. This ensures:\n\n- **Atomicity**: All operations in a function either succeed together or fail together\n- **Consistency**: Database state remains valid even if errors occur\n- **Isolation**: Concurrent operations don't interfere with each other\n- **Automatic cleanup**: Transactions commit on success, rollback on any exception\n\n```python\n@engine.transaction  # This entire function is one atomic operation\ndef transfer_money(tx, from_account_id, to_account_id, amount):\n    # If ANY operation fails, ALL changes are automatically rolled back\n    from_account = tx.table('accounts').find(from_account_id)\n    to_account = tx.table('accounts').find(to_account_id)\n    \n    from_account['balance'] -= amount  # This change...\n    to_account['balance'] += amount    # ...and this change happen together or not at all\n    \n    # No need to manually commit - happens automatically when function completes\n```\n\n### 2. Rows as Python Dictionaries\n\nDatabase rows behave exactly like Python dictionaries, using familiar syntax:\n\n```python\n@engine.transaction\ndef work_with_user(tx):\n    user = tx.table('users').find(123)\n    \n    # Read like a dictionary\n    name = user['name']\n    email = user['email']\n    \n    # Update like a dictionary  \n    user['name'] = 'New Name'\n    user['status'] = 'active'\n    \n    # Check existence like a dictionary\n    if 'phone' in user:\n        phone = user['phone']\n    \n    # Get all data like a dictionary\n    user_data = dict(user)  # or user.to_dict()\n```\n\nThis design eliminates the need to learn ORM-specific syntax while maintaining the power and flexibility of direct database access.\n\n## Features\n\n- **Multi-database support**: PostgreSQL, MySQL, SQLite, SQL Server\n- **Transaction management**: Decorator-based transaction handling with automatic rollback\n- **Query builder**: Database-agnostic SQL generation with foreign key expansion\n- **Connection pooling**: Automatic connection management and pooling\n- **Type safety**: Comprehensive type hints and validation\n- **Modern Python**: Built for Python 3.8+ with modern packaging\n\n## Supported Databases\n\n- **PostgreSQL** (via psycopg2)\n- **MySQL** (via mysqlclient)\n- **SQLite** (built-in sqlite3)\n- **SQL Server** (via pytds)\n\n## Installation\n\nInstall the base package:\n\n```bash\npip install velocity-python\n```\n\nInstall with database-specific dependencies:\n\n```bash\n# For PostgreSQL\npip install velocity-python[postgres]\n\n# For MySQL  \npip install velocity-python[mysql]\n\n# For SQL Server\npip install velocity-python[sqlserver]\n\n# For all databases\npip install velocity-python[all]\n```\n\n## Quick Start\n\n### Database Connection\n\n```python\nimport velocity.db\n\n# PostgreSQL\nengine = velocity.db.postgres(\n    host=\"localhost\",\n    port=5432,\n    database=\"mydb\",\n    user=\"username\",\n    password=\"password\"\n)\n\n# 
MySQL\nengine = velocity.db.mysql(\n    host=\"localhost\",\n    port=3306,\n    database=\"mydb\",\n    user=\"username\", \n    password=\"password\"\n)\n\n# SQLite\nengine = velocity.db.sqlite(\"path/to/database.db\")\n\n# SQL Server\nengine = velocity.db.sqlserver(\n    host=\"localhost\",\n    port=1433,\n    database=\"mydb\",\n    user=\"username\",\n    password=\"password\"\n### Transaction Management\n\nVelocity.DB enforces a \"one transaction per function\" pattern using the `@engine.transaction` decorator. The decorator intelligently handles transaction injection:\n\n#### How Transaction Injection Works\n\nThe `@engine.transaction` decorator automatically provides a transaction object, but **you must declare `tx` as a parameter** in your function signature:\n\n```python\n@engine.transaction\ndef create_user_with_profile(tx):  # \u2190 You MUST declare 'tx' parameter\n    # The engine automatically creates and injects a Transaction object here\n    # 'tx' is provided by the decorator, not by the caller\n    \n    user = tx.table('users').new()\n    user['name'] = 'John Doe'\n    user['email'] = 'john@example.com'\n    \n    profile = tx.table('profiles').new()\n    profile['user_id'] = user['sys_id']\n    profile['bio'] = 'Software developer'\n    \n    return user['sys_id']\n\n# When you call the function, you DON'T pass the tx argument:\nuser_id = create_user_with_profile()  # \u2190 No 'tx' argument needed\n```\n\n#### The Magic Behind the Scenes\n\nThe decorator uses Python's `inspect` module to:\n\n1. **Check the function signature** - Looks for a parameter named `tx`\n2. **Automatic injection** - If `tx` is declared but not provided by caller, creates a new Transaction\n3. **Parameter positioning** - Inserts the transaction object at the correct position in the argument list\n4. 
**Transaction lifecycle** - Automatically commits on success or rolls back on exceptions\n\n```python\n@engine.transaction\ndef update_user_settings(tx, user_id, settings):  # \u2190 'tx' must be declared\n    # Engine finds 'tx' in position 0, creates Transaction, and injects it\n    user = tx.table('users').find(user_id)\n    user['settings'] = settings\n    user['last_updated'] = datetime.now()\n\n# Call without providing 'tx' - the decorator handles it:\nupdate_user_settings(123, {'theme': 'dark'})  # \u2190 Only pass your parameters\n```\n\n#### Advanced: Transaction Reuse\n\nIf you want multiple function calls to be part of the same transaction, **explicitly pass the `tx` object** to chain operations together:\n\n```python\n@engine.transaction\ndef create_user(tx, name, email):\n    user = tx.table('users').new()\n    user['name'] = name\n    user['email'] = email\n    return user['sys_id']\n\n@engine.transaction\ndef create_profile(tx, user_id, bio):\n    profile = tx.table('profiles').new()\n    profile['user_id'] = user_id\n    profile['bio'] = bio\n    return profile['sys_id']\n\n@engine.transaction\ndef create_user_with_profile(tx, name, email, bio):\n    # All operations in this function use the SAME transaction\n    \n    # Pass 'tx' to keep this call in the same transaction\n    user_id = create_user(tx, name, email)  # \u2190 Pass 'tx' explicitly\n    \n    # Pass 'tx' to keep this call in the same transaction too\n    profile_id = create_profile(tx, user_id, bio)  # \u2190 Pass 'tx' explicitly\n    \n    # If ANY operation fails, ALL changes are rolled back together\n    return user_id\n\n# When you call the main function, don't pass tx - let the decorator provide it:\nuser_id = create_user_with_profile('John', 'john@example.com', 'Developer')\n```\n\n#### Two Different Transaction Behaviors\n\n```python\n# Scenario 1: SAME transaction (pass tx through)\n@engine.transaction\ndef atomic_operation(tx):\n    create_user(tx, 'John', 'john@example.com')     # \u2190 Part of same transaction\n    create_profile(tx, user_id, 'Developer')        # \u2190 Part of same transaction\n    # If profile creation fails, user creation is also rolled back\n\n# Scenario 2: SEPARATE transactions (don't pass tx)\n@engine.transaction  \ndef separate_operations(tx):\n    create_user('John', 'john@example.com')         # \u2190 Creates its own transaction\n    create_profile(user_id, 'Developer')           # \u2190 Creates its own transaction  \n    # If profile creation fails, user creation is NOT rolled back\n```\n\n**Key Rule**: To include function calls in the same transaction, **always pass the `tx` parameter explicitly**. 
If you don't pass `tx`, each decorated function creates its own separate transaction.\n\n#### Class-Level Transaction Decoration\n\nYou can also apply `@engine.transaction` to an entire class, which automatically wraps **all methods** that have `tx` in their signature:\n\n```python\n@engine.transaction\nclass UserService:\n    \"\"\"All methods with 'tx' parameter get automatic transaction injection\"\"\"\n    \n    def create_user(self, tx, name, email):\n        # This method gets automatic transaction injection\n        user = tx.table('users').new()\n        user['name'] = name\n        user['email'] = email\n        return user['sys_id']\n    \n    def update_user(self, tx, user_id, **kwargs):\n        # This method also gets automatic transaction injection\n        user = tx.table('users').find(user_id)\n        for key, value in kwargs.items():\n            user[key] = value\n        return user.to_dict()\n    \n    def get_user_count(self):\n        # This method is NOT wrapped (no 'tx' parameter)\n        return \"This method runs normally without transaction injection\"\n    \n    def some_utility_method(self, data):\n        # This method is NOT wrapped (no 'tx' parameter)\n        return data.upper()\n\n# Usage - each method call gets its own transaction automatically:\nservice = UserService()\n\n# Each call creates its own transaction:\nuser_id = service.create_user('John', 'john@example.com')  # \u2190 Own transaction\nuser_data = service.update_user(user_id, status='active')  # \u2190 Own transaction\n\n# Methods without 'tx' work normally:\ncount = service.get_user_count()  # \u2190 No transaction injection\n```\n\n#### Combining Class and Method Transactions\n\n```python\n@engine.transaction\nclass UserService:\n    \n    def create_user(self, tx, name, email):\n        user = tx.table('users').new()\n        user['name'] = name\n        user['email'] = email\n        return user['sys_id']\n    \n    def create_profile(self, tx, user_id, bio):\n        profile = tx.table('profiles').new()\n        profile['user_id'] = user_id\n        profile['bio'] = bio\n        return profile['sys_id']\n    \n    def create_user_with_profile(self, tx, name, email, bio):\n        # Share transaction across method calls within the same class\n        user_id = self.create_user(tx, name, email)      # \u2190 Pass tx to share transaction\n        profile_id = self.create_profile(tx, user_id, bio)  # \u2190 Pass tx to share transaction\n        return user_id\n\n# Usage:\nservice = UserService()\n# This creates ONE transaction for all operations:\nuser_id = service.create_user_with_profile('John', 'john@example.com', 'Developer')\n```\n\n**Key Benefits:**\n- **Automatic transaction management**: No need to call `begin()`, `commit()`, or `rollback()`\n- **Intelligent injection**: Engine inspects your function and provides `tx` automatically\n- **Parameter flexibility**: `tx` can be in any position in your function signature\n- **Transaction reuse**: Pass existing transactions to chain operations together\n- **Clear boundaries**: Each function represents a complete business operation\n- **Testable**: Easy to test since each function is a complete unit of work\n\n**Important Rules:**\n- **Must declare `tx` parameter**: The function signature must include `tx` as a parameter\n- **Don't pass `tx` when calling from outside**: Let the decorator provide it automatically for new transactions\n- **DO pass `tx` for same transaction**: To include function calls in the same transaction, explicitly pass 
the `tx` parameter\n- **Class decoration**: `@engine.transaction` on a class wraps all methods that have `tx` in their signature\n- **Selective wrapping**: Methods without `tx` parameter are not affected by class-level decoration\n- **No `_tx` parameter**: Using `_tx` as a parameter name is forbidden (reserved)\n- **Position matters**: The decorator injects `tx` at the exact position declared in your signature\n\n### Table Operations\n\n#### Creating Tables\n\n```python\n@engine.transaction\ndef create_tables(tx):\n    # Create a users table\n    users = tx.table('users')\n    users.create()\n    \n    # Add columns by treating the row like a dictionary\n    user = users.new()  # Creates a new row object\n    user['name'] = 'Sample User'        # Sets column values using dict syntax\n    user['email'] = 'user@example.com'  # No need for setters/getters\n    user['created_at'] = datetime.now() # Python types automatically handled\n    \n    # The row is automatically saved when the transaction completes\n```\n\n#### Selecting Data\n\n```python\n@engine.transaction\ndef query_users(tx):\n    users = tx.table('users')\n    \n    # Select all users - returns list of dict-like row objects\n    all_users = users.select().all()\n    for user in all_users:\n        print(f\"User: {user['name']} ({user['email']})\")  # Dict syntax\n    \n    # Select with conditions\n    active_users = users.select(where={'status': 'active'}).all()\n    \n    # Select specific columns\n    names = users.select(columns=['name', 'email']).all()\n    \n    # Select with ordering and limits\n    recent = users.select(\n        orderby='created_at DESC',\n        qty=10\n    ).all()\n    \n    # Find single record - returns dict-like row object\n    user = users.find({'email': 'john@example.com'})\n    if user:\n        # Access like dictionary\n        user_name = user['name']\n        user_id = user['sys_id']\n        \n        # Check existence like dictionary\n        has_phone = 'phone' in user\n        \n        # Convert to regular dict if needed\n        user_dict = user.to_dict()\n    \n    # Get by primary key\n    user = users.find(123)  # Returns dict-like row object or None\n```\n\n#### Updating Data\n\n```python\n@engine.transaction\ndef update_user(tx):\n    users = tx.table('users')\n    \n    # Find and update using dictionary syntax\n    user = users.find(123)  # Returns a row that behaves like a dict\n    user['name'] = 'Updated Name'         # Direct assignment like a dict\n    user['updated_at'] = datetime.now()   # No special methods needed\n    \n    # Check if columns exist before updating\n    if 'phone' in user:\n        user['phone'] = '+1-555-0123'\n    \n    # Get current values like a dictionary\n    current_status = user.get('status', 'unknown')\n    \n    # Bulk update using where conditions\n    users.update(\n        {'status': 'inactive'},           # What to update (dict format)\n        where={'<last_login': '2023-01-01'}  # Condition using operator prefix\n    )\n```\n\n#### Inserting Data\n\n```python\n@engine.transaction\ndef create_users(tx):\n    users = tx.table('users')\n    \n    # Method 1: Create new row and populate like a dictionary\n    user = users.new()  # Creates empty row object\n    user['name'] = 'New User'           # Assign values using dict syntax\n    user['email'] = 'new@example.com'   # \n    # Row automatically saved when transaction completes\n    \n    # Method 2: Insert with dictionary data directly\n    user_id = users.insert({\n        'name': 'Another 
User',\n        'email': 'another@example.com'\n    })\n    \n    # Method 3: Upsert (insert or update) using dictionary syntax\n    users.upsert(\n        {'name': 'John Doe', 'status': 'active'},  # Data to insert/update\n        {'email': 'john@example.com'}              # Matching condition\n    )\n```\n\n#### Deleting Data\n\n```python\n@engine.transaction\ndef delete_users(tx):\n    users = tx.table('users')\n    \n    # Delete single record\n    user = users.find(123)\n    user.delete()\n    \n    # Delete with conditions\n    users.delete(where={'status': 'inactive'})\n    \n    # Truncate table\n    users.truncate()\n    \n    # Drop table\n    users.drop()\n```\n\n### Advanced Queries\n\n#### Foreign Key Navigation\n\nVelocity.DB supports automatic foreign key expansion using pointer syntax:\n\n```python\n@engine.transaction  \ndef get_user_with_profile(tx):\n    users = tx.table('users')\n    \n    # Automatic join via foreign key\n    users_with_profiles = users.select(\n        columns=['name', 'email', 'profile_id>bio', 'profile_id>avatar_url'],\n        where={'status': 'active'}\n    ).all()\n```\n\n#### Complex Conditions\n\nVelocity.DB supports various where clause formats:\n\n```python\n@engine.transaction\ndef complex_queries(tx):\n    users = tx.table('users')\n    \n    # Dictionary format with operator prefixes\n    results = users.select(where={\n        'status': 'active',          # Equals (default)\n        '>=created_at': '2023-01-01',  # Greater than or equal\n        '><age': [18, 65],           # Between\n        '%email': '@company.com',    # Like\n        '!status': 'deleted'         # Not equal\n    }).all()\n    \n    # List of tuples format for complex predicates\n    results = users.select(where=[\n        ('status = %s', 'active'),\n        ('priority = %s OR urgency = %s', ('high', 'critical'))\n    ]).all()\n    \n    # Raw string format\n    results = users.select(where=\"status = 'active' AND age >= 18\").all()\n```\n\n**Available Operators:**\n\n| Operator | SQL Equivalent | Example Usage | Description |\n|----------|----------------|---------------|-------------|\n| `=` (default) | `=` | `{'name': 'John'}` | Equals (default when no operator specified) |\n| `>` | `>` | `{'>age': 18}` | Greater than |\n| `<` | `<` | `{'<score': 100}` | Less than |\n| `>=` | `>=` | `{'>=created_at': '2023-01-01'}` | Greater than or equal |\n| `<=` | `<=` | `{'<=updated_at': '2023-12-31'}` | Less than or equal |\n| `!` | `<>` | `{'!status': 'deleted'}` | Not equal |\n| `!=` | `<>` | `{'!=status': 'deleted'}` | Not equal (alternative) |\n| `<>` | `<>` | `{'<>status': 'deleted'}` | Not equal (SQL style) |\n| `%` | `LIKE` | `{'%email': '@company.com'}` | Like pattern matching |\n| `!%` | `NOT LIKE` | `{'!%name': 'test%'}` | Not like pattern matching |\n| `><` | `BETWEEN` | `{'><age': [18, 65]}` | Between two values (inclusive) |\n| `!><` | `NOT BETWEEN` | `{'!><score': [0, 50]}` | Not between two values |\n```\n\n#### Aggregations and Grouping\n\n```python\n@engine.transaction\ndef analytics(tx):\n    orders = tx.table('orders')\n    \n    # Count records\n    total_orders = orders.count()\n    recent_orders = orders.count(where={'>=created_at': '2023-01-01'})\n    \n    # Aggregations\n    stats = orders.select(\n        columns=['COUNT(*) as total', 'SUM(amount) as revenue', 'AVG(amount) as avg_order'],\n        where={'status': 'completed'},\n        groupby='customer_id'\n    ).all()\n```\n\n### Raw SQL\n\nWhen you need full control, execute raw SQL. 
The `tx.execute()` method returns a **Result object** that provides flexible data transformation:\n\n```python\n@engine.transaction\ndef raw_queries(tx):\n    # Execute raw SQL - returns a Result object\n    result = tx.execute(\"\"\"\n        SELECT u.name, u.email, COUNT(o.id) as order_count\n        FROM users u\n        LEFT JOIN orders o ON u.id = o.user_id\n        WHERE u.status = %s\n        GROUP BY u.id, u.name, u.email\n        HAVING COUNT(o.id) > %s\n    \"\"\", ['active', 5])\n    \n    # Multiple ways to work with the Result object:\n    \n    # Get all rows as list of dictionaries (default)\n    rows = result.all()\n    for row in rows:\n        print(f\"User: {row['name']} ({row['email']}) - {row['order_count']} orders\")\n    \n    # Or iterate one row at a time\n    for row in result:\n        print(f\"User: {row['name']}\")\n    \n    # Transform data format\n    result.as_tuple().all()        # List of tuples\n    result.as_list().all()         # List of lists  \n    result.as_json().all()         # List of JSON strings\n    result.as_named_tuple().all()  # List of (name, value) pairs\n    \n    # Get single values\n    total = tx.execute(\"SELECT COUNT(*) FROM users\").scalar()\n    \n    # Get simple list of single column values\n    names = tx.execute(\"SELECT name FROM users\").as_simple_list().all()\n    \n    # Get just the first row\n    first_user = tx.execute(\"SELECT * FROM users LIMIT 1\").one()\n```\n\n#### Result Object Methods\n\nThe **Result object** returned by `tx.execute()` provides powerful data transformation capabilities:\n\n| Method | Description | Returns |\n|--------|-------------|---------|\n| `.all()` | Get all rows at once | `List[Dict]` (default) or transformed format |\n| `.one(default=None)` | Get first row only | `Dict` or `default` if no rows |\n| `.scalar(default=None)` | Get first column of first row | Single value or `default` |\n| `.batch(qty=1)` | Iterate in batches | Generator yielding lists of rows |\n| **Data Format Transformations:** |\n| `.as_dict()` | Rows as dictionaries (default) | `{'column': value, ...}` |\n| `.as_tuple()` | Rows as tuples | `(value1, value2, ...)` |\n| `.as_list()` | Rows as lists | `[value1, value2, ...]` |\n| `.as_json()` | Rows as JSON strings | `'{\"column\": \"value\", ...}'` |\n| `.as_named_tuple()` | Rows as name-value pairs | `[('column', value), ...]` |\n| `.as_simple_list(pos=0)` | Extract single column | `value` (from position pos) |\n| **Utility Methods:** |\n| `.headers` | Get column names | `['col1', 'col2', ...]` |\n| `.close()` | Close the cursor | `None` |\n| `.enum()` | Add row numbers | `(index, row)` tuples |\n\n```python\n@engine.transaction  \ndef result_examples(tx):\n    # Different output formats for the same query\n    result = tx.execute(\"SELECT name, email FROM users LIMIT 3\")\n    \n    # As dictionaries (default)\n    dicts = result.as_dict().all()\n    # [{'name': 'John', 'email': 'john@example.com'}, ...]\n    \n    # As tuples  \n    tuples = result.as_tuple().all()\n    # [('John', 'john@example.com'), ...]\n    \n    # As JSON strings\n    json_rows = result.as_json().all()  \n    # ['{\"name\": \"John\", \"email\": \"john@example.com\"}', ...]\n    \n    # Just email addresses\n    emails = result.as_simple_list(1).all()  # Position 1 = email column\n    # ['john@example.com', 'jane@example.com', ...]\n    \n    # With row numbers\n    numbered = result.enum().all()\n    # [(0, {'name': 'John', 'email': 'john@example.com'}), ...]\n```\n\n## Automatic Schema 
Evolution\n\nOne of Velocity.DB's most powerful features is **automatic table and column creation**. The library uses decorators to catch database schema errors and automatically evolve your schema as your code changes.\n\n### How Automatic Creation Works\n\nVelocity.DB uses the `@create_missing` decorator on key table operations. When you try to:\n\n- **Insert data** with new columns\n- **Update rows** with new columns  \n- **Query tables** that don't exist\n- **Reference columns** that don't exist\n\nThe library automatically:\n\n1. **Catches the database error** (table missing, column missing)\n2. **Analyzes the data** you're trying to work with\n3. **Creates the missing table/columns** with appropriate types\n4. **Retries the original operation** seamlessly\n\n```python\n@engine.transaction\ndef create_user_profile(tx):\n    # This table and columns don't exist yet - that's OK!\n    users = tx.table('users')  # Table will be created automatically\n    \n    # Insert data with new columns - they'll be created automatically\n    user = users.new()\n    user['name'] = 'John Doe'           # VARCHAR column created automatically\n    user['age'] = 28                    # INTEGER column created automatically  \n    user['salary'] = 75000.50           # NUMERIC column created automatically\n    user['is_active'] = True            # BOOLEAN column created automatically\n    user['bio'] = 'Software engineer'   # TEXT column created automatically\n    \n    # The table and all columns are now created and data is inserted\n    return user['sys_id']\n\n# Call this function - table and columns created seamlessly\nuser_id = create_user_profile()\n```\n\n### Type Inference\n\nVelocity.DB automatically infers SQL types from Python values:\n\n| Python Type | SQL Type (PostgreSQL) | SQL Type (MySQL) | SQL Type (SQLite) |\n|-------------|------------------------|-------------------|-------------------|\n| `str` | `TEXT` | `TEXT` | `TEXT` |\n| `int` | `BIGINT` | `BIGINT` | `INTEGER` |\n| `float` | `NUMERIC(19,6)` | `DECIMAL(19,6)` | `REAL` |\n| `bool` | `BOOLEAN` | `BOOLEAN` | `INTEGER` |\n| `datetime` | `TIMESTAMP` | `DATETIME` | `TEXT` |\n| `date` | `DATE` | `DATE` | `TEXT` |\n\n### Progressive Schema Evolution\n\nYour schema evolves naturally as your application grows:\n\n```python\n# Week 1: Start simple\n@engine.transaction\ndef create_basic_user(tx):\n    users = tx.table('users')\n    user = users.new()\n    user['name'] = 'Alice'\n    user['email'] = 'alice@example.com'\n    return user['sys_id']\n\n# Week 2: Add more fields\n@engine.transaction  \ndef create_detailed_user(tx):\n    users = tx.table('users')\n    user = users.new()\n    user['name'] = 'Bob'\n    user['email'] = 'bob@example.com'\n    user['phone'] = '+1-555-0123'      # New column added automatically\n    user['department'] = 'Engineering'  # Another new column added automatically\n    user['start_date'] = date.today()   # Date column added automatically\n    return user['sys_id']\n\n# Week 3: Even more fields\n@engine.transaction\ndef create_full_user(tx):\n    users = tx.table('users')  \n    user = users.new()\n    user['name'] = 'Carol'\n    user['email'] = 'carol@example.com'\n    user['phone'] = '+1-555-0124'\n    user['department'] = 'Marketing'\n    user['start_date'] = date.today()\n    user['salary'] = 85000.00          # Salary column added automatically\n    user['is_manager'] = True          # Boolean column added automatically\n    user['notes'] = 'Excellent performer'  # Notes column added automatically\n    return 
### Progressive Schema Evolution

Your schema evolves naturally as your application grows:

```python
from datetime import date

# Week 1: Start simple
@engine.transaction
def create_basic_user(tx):
    users = tx.table('users')
    user = users.new()
    user['name'] = 'Alice'
    user['email'] = 'alice@example.com'
    return user['sys_id']

# Week 2: Add more fields
@engine.transaction
def create_detailed_user(tx):
    users = tx.table('users')
    user = users.new()
    user['name'] = 'Bob'
    user['email'] = 'bob@example.com'
    user['phone'] = '+1-555-0123'       # New column added automatically
    user['department'] = 'Engineering'  # Another new column added automatically
    user['start_date'] = date.today()   # Date column added automatically
    return user['sys_id']

# Week 3: Even more fields
@engine.transaction
def create_full_user(tx):
    users = tx.table('users')
    user = users.new()
    user['name'] = 'Carol'
    user['email'] = 'carol@example.com'
    user['phone'] = '+1-555-0124'
    user['department'] = 'Marketing'
    user['start_date'] = date.today()
    user['salary'] = 85000.00               # Salary column added automatically
    user['is_manager'] = True               # Boolean column added automatically
    user['notes'] = 'Excellent performer'   # Notes column added automatically
    return user['sys_id']
```

### Behind the Scenes

The `@create_missing` decorator works conceptually like this (simplified):

```python
# This is what happens automatically:
def create_missing(func):
    def wrapper(self, *args, **kwds):
        try:
            # Try the original operation
            return func(self, *args, **kwds)
        except DbTableMissingError:
            # Table doesn't exist - create it from the data
            data = extract_data_from_args(args, kwds)
            self.create(data)  # Create table with inferred columns
            return func(self, *args, **kwds)  # Retry operation
        except DbColumnMissingError:
            # Column doesn't exist - add it to the table
            data = extract_data_from_args(args, kwds)
            self.alter(data)  # Add missing columns
            return func(self, *args, **kwds)  # Retry operation
    return wrapper
```

### Which Operations Are Protected

These table operations automatically create missing schema elements:

- `table.insert(data)` - Creates table and columns
- `table.update(data, where)` - Creates missing columns in data
- `table.merge(data, pk)` - Creates table and columns (upsert; see the sketch below)
- `table.alter_type(column, type)` - Creates column if missing
- `table.alter(columns)` - Adds missing columns
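As one example, `merge` combines an upsert with the same schema protection. This is a minimal sketch, assuming the `merge(data, pk)` signature listed above with `pk` naming the column used to match existing rows, and a hypothetical `settings` table:

```python
from datetime import datetime

@engine.transaction
def save_setting(tx, key, value):
    settings = tx.table('settings')  # Created automatically if it doesn't exist
    # Insert or update the row matching 'key'; any new columns in the
    # data dict (e.g. 'updated_at') are added to the table first.
    settings.merge(
        {'key': key, 'value': value, 'updated_at': datetime.now()},
        'key',
    )
```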
### Manual Schema Control

If you prefer explicit control, you can manage the schema yourself instead of relying on automatic creation:

```python
@engine.transaction
def explicit_schema_control(tx):
    users = tx.table('users')

    # Check if table exists before using it
    if not users.exists():
        users.create({
            'name': str,
            'email': str,
            'age': int,
            'is_active': bool
        })

    # Check if column exists before using it
    if 'phone' not in users.column_names():
        users.alter({'phone': str})

    # Now safely use the table
    user = users.new()
    user['name'] = 'David'
    user['email'] = 'david@example.com'
    user['phone'] = '+1-555-0125'
```

### Development Benefits

**For Development:**

- **Rapid prototyping**: Focus on business logic, not database setup
- **Zero configuration**: No migration scripts or schema files needed
- **Natural evolution**: Schema grows with your application

**For Production:**

- **Controlled deployment**: Use `sql_only=True` to generate schema changes for review
- **Safe migrations**: Test automatic changes in staging environments
- **Backwards compatibility**: New columns are added, existing data preserved

```python
# Generate SQL for review without executing
@engine.transaction
def preview_schema_changes(tx):
    users = tx.table('users')

    # See what SQL would be generated
    sql, vals = users.insert({
        'name': 'Test User',
        'new_field': 'New Value'
    }, sql_only=True)

    print("SQL that would be executed:")
    print(sql)
    # Shows: ALTER TABLE users ADD COLUMN new_field TEXT; INSERT INTO users...
```

**Key Benefits:**

- **Zero-friction development**: Write code, not schema migrations
- **Type-safe evolution**: Python types automatically map to appropriate SQL types
- **Production-ready**: Generate reviewable SQL for controlled deployments
- **Database-agnostic**: Works consistently across PostgreSQL, MySQL, SQLite, and SQL Server

## Error Handling

The "one transaction per function" design automatically handles rollbacks on exceptions:

```python
import logging
from datetime import datetime

logger = logging.getLogger(__name__)

@engine.transaction
def safe_transfer(tx, from_id, to_id, amount):
    try:
        # Multiple operations that must succeed together
        from_account = tx.table('accounts').find(from_id)
        to_account = tx.table('accounts').find(to_id)

        # Work with rows like dictionaries
        if from_account['balance'] < amount:
            raise ValueError("Insufficient funds")

        from_account['balance'] -= amount  # This change...
        to_account['balance'] += amount    # ...and this change are atomic

        # If any operation fails, the entire transaction rolls back automatically

    except Exception as e:
        # Transaction automatically rolled back - no manual intervention needed
        logger.error(f"Transfer failed: {e}")
        raise  # Re-raise to let caller handle the business logic

@engine.transaction
def create_user_with_validation(tx, user_data):
    # Each function is a complete business operation
    users = tx.table('users')

    # Check if user already exists
    existing = users.find({'email': user_data['email']})
    if existing:
        raise ValueError("User already exists")

    # Create new user using dictionary interface
    user = users.new()
    user['name'] = user_data['name']
    user['email'] = user_data['email']
    user['created_at'] = datetime.now()

    # If we reach here, everything commits automatically
    return user['sys_id']
```

**Key Benefits of Transaction-Per-Function:**

- **Automatic rollback**: Any exception undoes all changes in that function
- **Clear error boundaries**: Each function represents one business operation
- **No resource leaks**: Connections and transactions are always properly cleaned up
- **Predictable behavior**: Functions either complete fully or have no effect

## Development

### Setting up for Development

This is currently a private repository. If you have access to the repository:

```bash
git clone <repository-url>
cd velocity-python
pip install -e .[dev]
```

### Running Tests

```bash
pytest tests/
```

### Code Quality

```bash
# Format code
black src/

# Type checking
mypy src/

# Linting
flake8 src/
```

## License

This project is licensed under the MIT License - see the LICENSE file for details.

## Contributing

This is currently a private repository, and we are not accepting public contributions at this time. However, this may change in the future based on community interest and project needs.

If you are interested in contributing to Velocity.DB, please reach out to discuss potential collaboration opportunities.

## Changelog

See [CHANGELOG.md](CHANGELOG.md) for a list of changes and version history.
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "A rapid application development library for interfacing with data storage",
    "version": "0.0.95",
    "project_urls": {
        "Homepage": "https://codeclubs.org/projects/velocity"
    },
    "split_keywords": [
        "database",
        " orm",
        " sql",
        " rapid-development",
        " data-storage"
    ],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "1e7d3fca850568dfb9a5784310d8bea9d00a0c169fdf46db9ad7a45684ff4043",
                "md5": "12a14293ca8ffe00c69dd62ccf003175",
                "sha256": "f207fa9ed40eb5f5676b2850e547de06788583093509331e499cd051de29e895"
            },
            "downloads": -1,
            "filename": "velocity_python-0.0.95-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "12a14293ca8ffe00c69dd62ccf003175",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.7",
            "size": 97518,
            "upload_time": "2025-08-09T04:24:22",
            "upload_time_iso_8601": "2025-08-09T04:24:22.937364Z",
            "url": "https://files.pythonhosted.org/packages/1e/7d/3fca850568dfb9a5784310d8bea9d00a0c169fdf46db9ad7a45684ff4043/velocity_python-0.0.95-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "c9178db5435fb43cc9bba947dcc83b31042ed0291b77e72586c032cc063626d0",
                "md5": "5b13162a40d1edc40dd151ef245fcb6a",
                "sha256": "99f3fc105855672323638185b9213ab6c343be2015b1f2ad202618d68a6ed20d"
            },
            "downloads": -1,
            "filename": "velocity_python-0.0.95.tar.gz",
            "has_sig": false,
            "md5_digest": "5b13162a40d1edc40dd151ef245fcb6a",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.7",
            "size": 112399,
            "upload_time": "2025-08-09T04:24:24",
            "upload_time_iso_8601": "2025-08-09T04:24:24.161525Z",
            "url": "https://files.pythonhosted.org/packages/c9/17/8db5435fb43cc9bba947dcc83b31042ed0291b77e72586c032cc063626d0/velocity_python-0.0.95.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-08-09 04:24:24",
    "github": false,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "lcname": "velocity-python"
}
        