welvet

Name: welvet
Version: 0.0.3
Summary: Wrapper for Embedding Loom Via External (C-ABI) Toolchain — GPU-accelerated neural networks with WebGPU binding/bridge
Author: OpenFluke / Samuel Watson
License: Apache-2.0
Requires Python: >=3.8
Keywords: neural-network, machine-learning, webgpu, gpu, deep-learning
Uploaded: 2025-11-05 02:24:19
Requirements: none recorded
# welvet - LOOM Python Bindings

**Wrapper for Embedding Loom Via External (C-ABI) Toolchain**

A high-performance neural network library for Python with WebGPU acceleration, delivered through C-ABI bindings.

## Installation

```bash
pip install welvet
```

## Quick Start

```python
import welvet

# Create a neural network with GPU
network = welvet.create_network(
    input_size=4,
    grid_rows=1,
    grid_cols=1,
    layers_per_cell=2,  # 2 layers: hidden + output
    use_gpu=True
)

# Configure network architecture: 4 -> 8 -> 2
welvet.configure_sequential_network(
    network,
    layer_sizes=[4, 8, 2],
    activations=[welvet.Activation.RELU, welvet.Activation.SIGMOID]
)

# Training data
inputs = [[0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8]]
targets = [[1.0, 0.0], [0.0, 1.0]]

# Train for 10 epochs
for epoch in range(10):
    loss = welvet.train_epoch(network, inputs, targets, learning_rate=0.1)
    print(f"Epoch {epoch+1}: loss = {loss:.4f}")

# Test the network
output = welvet.forward(network, [0.1, 0.2, 0.3, 0.4])
print(f"Output: {output}")

# Clean up
welvet.cleanup_gpu(network)
welvet.free_network(network)
```

## Features

- 🚀 **GPU Acceleration**: WebGPU-powered compute shaders for high performance
- 🎯 **Cross-Platform**: Pre-compiled binaries for Linux, macOS, Windows, Android
- 📦 **Easy Integration**: Simple Python API with high-level helpers
- ⚡ **Grid Architecture**: Flexible grid-based neural network topology
- 🔧 **Low-Level Access**: Direct control over layers and training loop
- 🎓 **Training Helpers**: Built-in functions for common training tasks

## API Reference

### Network Management

#### `create_network(input_size, grid_rows=2, grid_cols=2, layers_per_cell=3, use_gpu=False)`

Creates a new grid-based neural network.

**Parameters:**

- `input_size` (int): Number of input features
- `grid_rows` (int): Grid rows (default: 2)
- `grid_cols` (int): Grid columns (default: 2)
- `layers_per_cell` (int): Layers per grid cell (default: 3)
- `use_gpu` (bool): Enable GPU acceleration (default: False)

**Simplified API:**

- `create_network(input_size, hidden_size, output_size, use_gpu=False)` - When called with three positional sizes, automatically calculates a grid for a simple input -> hidden -> output network

**Returns:** Network handle (int)
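
Both call styles can be sketched side by side; this is a minimal sketch, assuming the simplified three-argument form dispatches on positional arguments as documented above:

```python
import welvet

# Explicit grid form: a 1x1 grid holding 2 layers
net_a = welvet.create_network(input_size=4, grid_rows=1, grid_cols=1,
                              layers_per_cell=2, use_gpu=False)

# Simplified form: 4 inputs -> 16 hidden -> 3 outputs, grid auto-calculated
net_b = welvet.create_network(4, 16, 3, use_gpu=False)

welvet.free_network(net_a)
welvet.free_network(net_b)
```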

#### `free_network(handle)`

Frees network resources.

**Parameters:**

- `handle` (int): Network handle

### Layer Configuration

#### `Activation` (Class)

Activation function constants:

- `Activation.RELU` (0) - ReLU activation
- `Activation.SIGMOID` (1) - Sigmoid activation
- `Activation.TANH` (2) - Tanh activation
- `Activation.LINEAR` (3) - Linear activation
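
Because the constants are plain integers, they can be passed anywhere an `activation` parameter is expected (see `init_dense_layer` below), for example:

```python
# Equivalent calls: Activation.TANH is the integer 2
layer = welvet.init_dense_layer(8, 8, welvet.Activation.TANH)
layer = welvet.init_dense_layer(8, 8, 2)
```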

#### `init_dense_layer(input_size, output_size, activation=0)`

Initialize a dense layer configuration.

**Parameters:**

- `input_size` (int): Input neurons
- `output_size` (int): Output neurons
- `activation` (int): Activation function (use `Activation` constants)

**Returns:** Layer configuration dict

#### `set_layer(handle, row, col, layer_index, layer_config)`

Set a layer in the network grid.

**Parameters:**

- `handle` (int): Network handle
- `row` (int): Grid row (0-indexed)
- `col` (int): Grid column (0-indexed)
- `layer_index` (int): Layer index in cell (0-indexed)
- `layer_config` (dict): Layer config from `init_dense_layer()`
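
A minimal sketch of wiring one layer by hand, assuming `net` is a handle from `create_network` with at least a 1x1 grid and one layer slot (the Custom Layer Configuration example below shows the full pattern):

```python
# Place an 8 -> 4 ReLU layer in grid cell (0, 0), layer slot 0
cfg = welvet.init_dense_layer(8, 4, welvet.Activation.RELU)
welvet.set_layer(net, 0, 0, 0, cfg)
```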

#### `configure_sequential_network(handle, layer_sizes, activations=None)`

High-level helper to configure a simple feedforward network.

**Parameters:**

- `handle` (int): Network handle (must have 1x1 grid)
- `layer_sizes` (List[int]): Layer sizes `[input, hidden1, ..., output]`
- `activations` (List[int], optional): Activation for each layer. Defaults to ReLU for hidden, Sigmoid for output.

**Example:**

```python
net = create_network(input_size=784, grid_rows=1, grid_cols=1, layers_per_cell=2)
configure_sequential_network(net, [784, 128, 10])  # MNIST classifier
```

#### `get_network_info(handle)`

Get network information.

**Returns:** Dict with `type`, `gpu_enabled`, `grid_rows`, `grid_cols`, `layers_per_cell`, `total_layers`
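
For example, assuming the returned dict carries the keys listed above:

```python
info = welvet.get_network_info(net)
print(f"{info['grid_rows']}x{info['grid_cols']} grid, "
      f"{info['layers_per_cell']} layers/cell, "
      f"GPU enabled: {info['gpu_enabled']}")
```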

### Operations

#### `forward(handle, input_data)`

Performs forward pass through the network.

**Parameters:**

- `handle` (int): Network handle
- `input_data` (List[float]): Input vector

**Returns:** Output vector (List[float])

#### `backward(handle, target_data)`

Performs backward pass for training.

**Parameters:**

- `handle` (int): Network handle
- `target_data` (List[float]): Target/label vector

#### `update_weights(handle, learning_rate)`

Updates network weights using computed gradients.

**Parameters:**

- `handle` (int): Network handle
- `learning_rate` (float): Learning rate for gradient descent
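
These three calls are the primitives that `train_epoch` wraps. A single hand-rolled training step might look like this sketch, assuming `net` is a configured network with 4 inputs and 2 outputs; the mean-squared-error line is purely illustrative and may differ from the loss the library computes internally:

```python
sample_input = [0.1, 0.2, 0.3, 0.4]   # one training example
sample_target = [1.0, 0.0]

output = welvet.forward(net, sample_input)   # 1. forward pass
welvet.backward(net, sample_target)          # 2. backpropagate the error
welvet.update_weights(net, 0.05)             # 3. apply a gradient-descent step

# Illustrative progress metric only (assumption: not the library's internal loss)
mse = sum((o - t) ** 2 for o, t in zip(output, sample_target)) / len(output)
print(f"MSE: {mse:.4f}")
```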

### Training Helpers

#### `train_epoch(handle, inputs, targets, learning_rate=0.01)`

Train the network for one epoch.

**Parameters:**

- `handle` (int): Network handle
- `inputs` (List[List[float]]): List of input vectors
- `targets` (List[List[float]]): List of target vectors
- `learning_rate` (float): Learning rate (default: 0.01)

**Returns:** Average loss for the epoch (float)

**Example:**

```python
loss = train_epoch(net, train_inputs, train_targets, learning_rate=0.1)
print(f"Epoch loss: {loss:.4f}")
```

### GPU Management

#### `initialize_gpu(handle)`

Explicitly initialize GPU resources for the given network.

**Parameters:**

- `handle` (int): Network handle

**Returns:** True if successful, False otherwise

#### `cleanup_gpu(handle)`

Release GPU resources.

**Parameters:**

- `handle` (int): Network handle

#### `get_version()`

Get LOOM library version string.

**Returns:** Version string (e.g., "LOOM C ABI v1.0")
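
A defensive lifecycle pattern, sketched under the assumption that a False return from `initialize_gpu` leaves the network usable on the CPU path and that `cleanup_gpu` is safe to call regardless:

```python
import welvet

net = welvet.create_network(input_size=4, grid_rows=1, grid_cols=1,
                            layers_per_cell=2, use_gpu=True)
try:
    if not welvet.initialize_gpu(net):
        print("GPU unavailable, continuing on CPU")  # assumption: CPU fallback
    print(welvet.get_version())
    # ... configure, train, and run inference here ...
finally:
    welvet.cleanup_gpu(net)
    welvet.free_network(net)
```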

## Examples

### Basic Training Example

```python
import welvet

# Create network with GPU
net = welvet.create_network(
    input_size=4,
    grid_rows=1,
    grid_cols=1,
    layers_per_cell=2,
    use_gpu=True
)

# Configure architecture: 4 -> 8 -> 2
welvet.configure_sequential_network(net, [4, 8, 2])

# Training data
inputs = [[0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8]]
targets = [[1.0, 0.0], [0.0, 1.0]]

# Train for 50 epochs
for epoch in range(50):
    loss = welvet.train_epoch(net, inputs, targets, learning_rate=0.1)
    if (epoch + 1) % 10 == 0:
        print(f"Epoch {epoch+1}: loss = {loss:.6f}")

# Test
output = welvet.forward(net, [0.1, 0.2, 0.3, 0.4])
print(f"Output: {output}")

# Cleanup
welvet.cleanup_gpu(net)
welvet.free_network(net)
```

### Custom Layer Configuration

```python
import welvet

# Create network
net = welvet.create_network(
    input_size=10,
    grid_rows=2,
    grid_cols=2,
    layers_per_cell=3,
    use_gpu=False
)

# Configure individual layers
for row in range(2):
    for col in range(2):
        # Layer 0: 10 -> 20 (ReLU)
        layer0 = welvet.init_dense_layer(10, 20, welvet.Activation.RELU)
        welvet.set_layer(net, row, col, 0, layer0)

        # Layer 1: 20 -> 15 (Tanh)
        layer1 = welvet.init_dense_layer(20, 15, welvet.Activation.TANH)
        welvet.set_layer(net, row, col, 1, layer1)

        # Layer 2: 15 -> 5 (Sigmoid)
        layer2 = welvet.init_dense_layer(15, 5, welvet.Activation.SIGMOID)
        welvet.set_layer(net, row, col, 2, layer2)

# Network is now configured
info = welvet.get_network_info(net)
print(f"Total layers: {info['total_layers']}")

welvet.free_network(net)
```

## Testing

Run the included examples to verify installation:

```bash
# Basic GPU training test
python examples/train_gpu.py
```

Or test programmatically:

```python
import welvet

# Test basic functionality
net = welvet.create_network(input_size=2, grid_rows=1, grid_cols=1,
                            layers_per_cell=2, use_gpu=False)  # 2 slots for the 2 layers below
welvet.configure_sequential_network(net, [2, 4, 2])

# Verify forward pass works
output = welvet.forward(net, [0.5, 0.5])
assert len(output) == 2, "Forward pass failed"

# Verify training works
inputs = [[0.0, 0.0], [1.0, 1.0]]
targets = [[1.0, 0.0], [0.0, 1.0]]
loss = welvet.train_epoch(net, inputs, targets, learning_rate=0.1)
assert loss > 0, "Training failed"

welvet.free_network(net)
print("✅ All tests passed!")
```

## Platform Support

Pre-compiled binaries included for:

- **Linux**: x86_64, ARM64
- **macOS**: ARM64 (Apple Silicon)
- **Windows**: x86_64
- **Android**: ARM64

## Building from Source

See the main [LOOM repository](https://github.com/openfluke/loom) for building the C ABI from source.

## License

Apache License 2.0

## Links

- [GitHub Repository](https://github.com/openfluke/loom)
- [C ABI Documentation](https://github.com/openfluke/loom/tree/main/cabi)
- [Issue Tracker](https://github.com/openfluke/loom/issues)

            
