# tensor-iroh

- **Name**: tensor-iroh
- **Version**: 0.1.1.dev2
- **Summary**: High-performance PyO3 bindings for tensor_iroh
- **License**: MIT
- **Requires-Python**: >=3.8
- **Keywords**: tensor, p2p, iroh, machine-learning
- **Requirements**: numpy (>=1.21.0), torch (>=1.12.0)
- **Homepage**: https://github.com/Tandemn-Labs/tensor-iroh
- **Upload time**: 2025-07-22 04:15:27
# Tensor Protocol - Direct Streaming Implementation

This is a **direct streaming tensor transfer protocol** built on top of Iroh's QUIC networking stack. Unlike traditional approaches, this implementation streams tensor data directly over QUIC connections for maximum performance with intelligent connection pooling.

## Why This Tensor Protocol is Faster

### **1. Direct QUIC Streaming vs Traditional Approaches**
- **Traditional**: Request → Store → Download (3-step process with disk I/O)
- **This Protocol**: Direct stream transfer (1-step, memory-to-memory)
- **Performance Gain**: 10-100x faster for repeated sends due to connection reuse (see the back-of-envelope sketch below)

### **2. Connection Pooling Architecture**
- **Smart Reuse**: Maintains QUIC connections for 5 minutes after use
- **Zero Setup Overhead**: Subsequent sends to same peer skip connection establishment
- **Latency Reduction**: Saves 100-500ms per send after the initial connection

### **3. Zero-Copy Design**
- **Memory Efficiency**: Tensors stream directly without intermediate buffers
- **Reduced GC Pressure**: Minimal Python object creation during transfer
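
The repeated-send gain follows from simple latency accounting: connection setup is paid once instead of on every send. A back-of-envelope sketch of the claim above, using illustrative numbers rather than measurements:

```python
# Illustrative latency model: ~300 ms connection setup, ~3 ms per in-memory transfer.
N = 100                      # sends to the same peer
setup_ms, transfer_ms = 300, 3

traditional = N * (setup_ms + transfer_ms)  # fresh connection (plus store/download) each send
pooled = setup_ms + N * transfer_ms         # one handshake, then pooled reuse

print(f"traditional: {traditional} ms, pooled: {pooled} ms, "
      f"speedup: {traditional / pooled:.0f}x")  # ~50x with these numbers
```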

## Key Features

- **Direct QUIC Streaming**: Tensors are sent directly over QUIC streams without intermediate blob storage
- **Connection Pooling**: Intelligent reuse of QUIC connections for improved performance
- **Zero-Copy Design**: Minimal data copying for efficient memory usage
- **Dual Python Bindings**: Both PyO3 (high-performance) and UniFFI (stable) options
- **Async/Await Support**: Full async support for non-blocking operations
- **Security**: TLS 1.3 encryption by default

## Architecture

### Core Components

1. **TensorProtocolHandler**: Implements Iroh's `ProtocolHandler` trait for custom protocol handling
2. **TensorNode**: Main API for sending/receiving tensors with connection pooling
3. **ConnectionPool**: Manages QUIC connection reuse for performance optimization
4. **Direct Streaming**: Uses QUIC bidirectional streams for immediate data transfer
5. **Custom ALPN**: Uses `"tensor-iroh/direct/0"` for protocol identification

### Connection Pool Architecture
```rust
pub struct ConnectionPool {
    connections: Arc<AsyncMutex<HashMap<String, PooledConnection>>>,
    max_idle_time: Duration,    // 5 minutes default
    max_connections: usize,      // 10 connections default
}

pub struct PooledConnection {
    connection: Connection,      // Iroh QUIC connection
    last_used: Instant,         // Last usage timestamp
    is_idle: bool,              // Connection state
}
```

### Protocol Flow

```
Node A                    Node B
  |                         |
  |-- Connect (QUIC) ------>|
  |   (or reuse existing)   |-- Accept Connection
  |-- Send TensorMessage -->|
  |    (with tensor data)   |-- Process & Store
  |                         |
  |-- Return to Pool ------>|
  |   (connection reuse)    |
```

### Message Types

```rust
enum TensorMessage {
    Request { tensor_name: String },           // Request specific tensor
    Response { tensor_name: String, data: TensorData }, // Send tensor data
    Error { message: String },                 // Error response
}
```
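
The README does not specify the wire encoding for these messages. As an illustration only, a `Response` could be framed as a length-prefixed header followed by the raw tensor payload; the field names below mirror the enum, but the framing itself is an assumption, not the protocol's actual format:

```python
import json
import struct

def frame_response(tensor_name: str, shape, dtype: str, payload: bytes) -> bytes:
    """Hypothetical framing: 4-byte big-endian header length, JSON header, raw bytes."""
    header = json.dumps({
        "type": "Response",
        "tensor_name": tensor_name,
        "shape": shape,
        "dtype": dtype,
        "data_len": len(payload),
    }).encode("utf-8")
    return struct.pack(">I", len(header)) + header + payload

def parse_response(buf: bytes):
    """Inverse of frame_response: split the buffer back into header and payload."""
    (header_len,) = struct.unpack_from(">I", buf, 0)
    header = json.loads(buf[4 : 4 + header_len])
    payload = buf[4 + header_len : 4 + header_len + header["data_len"]]
    return header, payload

framed = frame_response("my_tensor", [2, 3], "float32", b"\x00" * 24)
header, payload = parse_response(framed)
assert header["tensor_name"] == "my_tensor" and len(payload) == 24
```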

## Performance Optimizations

### Connection Pooling Benefits

- **Reduced Latency**: Eliminates connection setup overhead (~100-500ms per send)
- **Better Throughput**: Warm connections skip repeated TLS/QUIC handshakes and retain congestion-control state
- **Resource Efficiency**: Fewer active connections to manage
- **Scalability**: Handles high-frequency tensor sends efficiently

### Pool Management

- **Automatic Cleanup**: Idle connections are cleaned up after 5 minutes
- **Thread Safety**: Uses `tokio::sync::AsyncMutex` for async-aware locking
- **Connection Limits**: Maximum 10 concurrent connections per node
- **Smart Reuse**: Connections are marked idle and reused for subsequent sends (sketched in Python below)
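
These rules amount to a small get-or-connect routine. A minimal Python sketch of the same policy (the real pool is the Rust `ConnectionPool` shown earlier; `connect_fn` and the eviction details here are illustrative):

```python
import asyncio
import time

MAX_IDLE_SECS = 5 * 60   # mirrors the 5-minute default
MAX_CONNECTIONS = 10     # mirrors the 10-connection default

class PoolSketch:
    def __init__(self, connect_fn):
        self._connect_fn = connect_fn   # async callable: addr -> connection
        self._conns = {}                # addr -> (connection, last_used)
        self._lock = asyncio.Lock()     # async-aware locking, as in the Rust pool

    async def get(self, addr):
        async with self._lock:
            now = time.monotonic()
            # Automatic cleanup: drop connections idle longer than MAX_IDLE_SECS.
            for key in [k for k, (_, t) in self._conns.items() if now - t > MAX_IDLE_SECS]:
                del self._conns[key]
            if addr in self._conns:     # smart reuse: skip connection setup entirely
                conn, _ = self._conns[addr]
                self._conns[addr] = (conn, now)
                return conn
            if len(self._conns) >= MAX_CONNECTIONS:  # connection limit: evict least recent
                oldest = min(self._conns, key=lambda k: self._conns[k][1])
                del self._conns[oldest]
            conn = await self._connect_fn(addr)      # only now pay the setup cost
            self._conns[addr] = (conn, now)
            return conn
```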

## Performance Comparison

| Feature | Traditional Approaches | This Tensor Protocol |
|---------|----------------------|---------------------|
| **Latency** | High (3-step process) | Low (direct transfer + connection reuse) |
| **Memory** | Stores data on disk | Streams directly with pooling |
| **Complexity** | Request→Store→Download | Single stream transfer |
| **Scalability** | Limited by storage | Limited by network + connection pool |
| **Use Case** | Large, persistent data | Real-time ML inference |
| **Performance** | Network + storage overhead | Optimized for repeated sends |
| **Connection Reuse** | None | Intelligent pooling (5min idle) |
| **Setup Overhead** | Per-request | Once per peer |

## Building and Testing

### Prerequisites

- Rust 1.70+
- Python 3.8+
- WSL (for Windows users)

### Build Options

#### Option 1: PyO3 Bindings (Recommended for Performance)
```bash
# Build PyO3 wheel with torch support
chmod +x build_pyo3_bindings.sh
./build_pyo3_bindings.sh

# Test PyO3 bindings
python python/test_tensor_protocol_pyo3.py
```

#### Option 2: UniFFI Bindings (Stable)
```bash
# Build UniFFI bindings
chmod +x build_uniffi_and_test.sh
./build_uniffi_and_test.sh

# Test UniFFI bindings
python python/test_tensor_protocol_uniffi.py
```

### Manual Build

```bash
# Navigate to the project directory
cd tensor-iroh

# Build Rust library
cargo build --release

# For PyO3 bindings
maturin build --release -F "python,torch" --out ./target/wheels

# For UniFFI bindings
uniffi-bindgen generate src/tensor_protocol.udl --language python --out-dir .
mkdir -p tensor_protocol_py
cp tensor_protocol.py tensor_protocol_py/
# Copy library files as shown in build_uniffi_and_test.sh
```

## Usage Example

### PyO3 Bindings (Recommended)
```python
import asyncio
import tensor_iroh as tp

async def main():
    # Create nodes (with connection pooling enabled)
    sender = tp.PyTensorNode()
    receiver = tp.PyTensorNode()
    
    # Start nodes
    await sender.start()
    await receiver.start()
    
    # Get addresses
    receiver_addr = await receiver.get_node_addr()
    
    # Create tensor data
    tensor_data = tp.PyTensorData(
        b"tensor_bytes_here",  # raw bytes
        [2, 3],               # shape
        "float32",            # dtype
        False                 # requires_grad
    )
    
    # Send tensor directly (connection will be pooled)
    await sender.send_tensor(receiver_addr, "my_tensor", tensor_data)
    
    # Send again to same peer (connection will be reused - much faster!)
    await sender.send_tensor(receiver_addr, "my_tensor2", tensor_data)
    
    # Check pool size (should be 1 for single peer)
    pool_size = await sender.pool_size()
    print(f"Connection pool size: {pool_size}")
    
    # Receive tensor
    received = await receiver.receive_tensor()
    if received:
        print(f"Received tensor shape: {received.shape}")
    
    # Cleanup
    sender.shutdown()
    receiver.shutdown()

asyncio.run(main())
```
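
The example passes placeholder bytes; in practice the payload comes from a real array. A sketch of the conversion, assuming `PyTensorData` expects the tensor's raw row-major bytes as the constructor arguments above suggest (`numpy` and `torch` are the package's declared requirements; a torch tensor can take the same path via `t.detach().cpu().numpy()`):

```python
import numpy as np

def to_tensor_args(arr: np.ndarray):
    """Decompose a NumPy array into the (bytes, shape, dtype) triple used above."""
    arr = np.ascontiguousarray(arr)   # ensure row-major layout before serializing
    return arr.tobytes(), list(arr.shape), str(arr.dtype)

def from_tensor_args(data: bytes, shape, dtype: str) -> np.ndarray:
    """Rebuild the array on the receiving side."""
    return np.frombuffer(data, dtype=np.dtype(dtype)).reshape(shape)

arr = np.arange(6, dtype=np.float32).reshape(2, 3)
data, shape, dtype = to_tensor_args(arr)
# tensor_data = tp.PyTensorData(data, shape, dtype, False)
roundtrip = from_tensor_args(data, shape, dtype)
assert np.array_equal(arr, roundtrip)
```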

### UniFFI Bindings
```python
import asyncio
from tensor_protocol import create_node, TensorData, TensorMetadata

async def main():
    # Create nodes (with connection pooling enabled)
    sender = create_node(None)
    receiver = create_node(None)
    
    # Start nodes
    await sender.start()
    await receiver.start()
    
    # Get addresses
    receiver_addr = await receiver.get_node_addr()
    
    # Create tensor
    tensor_data = TensorData(
        metadata=TensorMetadata(
            shape=[2, 3],
            dtype="float32",
            requires_grad=False
        ),
        data=b"tensor_bytes_here"
    )
    
    # Send tensor directly (connection will be pooled)
    await sender.send_tensor_direct(receiver_addr, "my_tensor", tensor_data)
    
    # Send again to same peer (connection will be reused)
    await sender.send_tensor_direct(receiver_addr, "my_tensor2", tensor_data)
    
    # Check pool size (should be 1 for single peer)
    pool_size = await sender.get_pool_size()
    print(f"Connection pool size: {pool_size}")
    
    # Receive tensor
    received = await receiver.receive_tensor()
    print(f"Received tensor: {received}")
    
    # Cleanup
    sender.shutdown()
    receiver.shutdown()

asyncio.run(main())
```

## Performance Characteristics

- **Small tensors** (< 1MB): ~1-5ms latency
- **Large tensors** (> 100MB): Limited by network bandwidth
- **Connection reuse**: ~100-500ms saved per subsequent send to the same peer (see the timing sketch below)
- **Throughput**: Optimized by connection pooling
- **Memory usage**: Minimal buffering, streaming design with intelligent pooling
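
One way to observe the reuse saving directly is to time a first send against an immediately following send to the same peer. A hedged sketch against the PyO3 API from the usage example (absolute numbers depend on the network path):

```python
import time

async def measure_reuse(sender, receiver_addr, tensor_data):
    # sender, receiver_addr, tensor_data as in the PyO3 usage example above
    t0 = time.perf_counter()
    await sender.send_tensor(receiver_addr, "warmup", tensor_data)  # pays connection setup
    t1 = time.perf_counter()
    await sender.send_tensor(receiver_addr, "reused", tensor_data)  # hits the pooled connection
    t2 = time.perf_counter()
    print(f"first send: {(t1 - t0) * 1e3:.1f} ms, pooled send: {(t2 - t1) * 1e3:.1f} ms")
```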

## Comprehensive Testing

The protocol includes 13 comprehensive stress tests:

1. **Basic Functionality**: Core tensor send/receive
2. **Pull/Request Pattern**: Control plane operations
3. **Concurrent Sends**: Race condition testing (see the sketch after this list)
4. **Rapid Fire Sends**: Timing stress testing
5. **Large Tensor Transfer**: 1MB+ tensor handling
6. **Multiple Receivers**: Broadcast scenarios
7. **Send Before Ready**: Timing edge cases
8. **Immediate Shutdown**: Resource cleanup
9. **Timeout Scenarios**: Network timeout handling
10. **Non-existent Tensor**: Error handling
11. **Bad Ticket Parsing**: Invalid address handling
12. **Post-shutdown Behavior**: Cleanup validation
13. **Connection Pool Reuse**: Pool functionality validation
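
For instance, the concurrent-send test (item 3) reduces to firing overlapping sends at a single receiver, roughly as in this sketch using the PyO3 API from the usage example:

```python
import asyncio

async def concurrent_sends(sender, receiver_addr, tensor_data, n=8):
    # Fire n overlapping sends; all should share one pooled connection.
    await asyncio.gather(*(
        sender.send_tensor(receiver_addr, f"tensor_{i}", tensor_data)
        for i in range(n)
    ))
```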


## Error Handling

The protocol includes comprehensive error handling; a Python-side handling sketch follows the list:

- `TensorError::Io`: Network I/O errors
- `TensorError::Serialization`: Data serialization errors  
- `TensorError::Connection`: QUIC connection errors
- `TensorError::Protocol`: Protocol-level errors
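
How these variants surface in Python is not shown here; assuming the bindings raise ordinary Python exceptions, calling code can wrap sends defensively, as in this hedged sketch:

```python
import asyncio

async def send_with_retry(sender, addr, name, tensor_data, attempts=3):
    for attempt in range(1, attempts + 1):
        try:
            await sender.send_tensor(addr, name, tensor_data)
            return
        except Exception:  # e.g. a connection error surfacing from the bindings
            if attempt == attempts:
                raise
            await asyncio.sleep(0.5 * attempt)  # simple backoff before retrying
```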

## Thread Safety

- **Async-aware locking**: Uses `tokio::sync::AsyncMutex` for connection pool
- **Non-blocking operations**: All async operations are non-blocking
- **Concurrent access**: Multiple threads can safely access the connection pool
- **Resource management**: Automatic cleanup of idle connections

## Future Enhancements

1. **Compression**: Add tensor compression for network efficiency
2. **Streaming**: Support for tensor streaming (partial sends)
3. **Authentication**: Add peer authentication and authorization
4. **Monitoring**: Add metrics and performance monitoring
5. **Batching**: Support for batched tensor transfers
6. **Pool Metrics**: Connection pool performance monitoring
7. **Adaptive Pooling**: Dynamic pool size based on usage patterns

## License

This implementation is designed to be compatible with Iroh's dual Apache-2.0/MIT license structure. 


            
