shared-tensor

Name: shared-tensor
Version: 0.1.2
Summary: A library for sharing GPU memory objects across processes using IPC mechanisms
Upload time: 2025-09-04 16:33:15
Requires Python: >=3.8
Keywords: gpu, memory, sharing, ipc, inter-process-communication, pytorch, tensorflow, cuda, model-serving, inference, distributed-computing
# Shared Tensor

[![Python Version](https://img.shields.io/badge/python-3.8%2B-blue.svg)](https://python.org)
[![PyTorch](https://img.shields.io/badge/PyTorch-1.12%2B-orange.svg)](https://pytorch.org)
[![License](https://img.shields.io/badge/license-Apache%202.0-green.svg)](LICENSE)

A high-performance library for sharing GPU memory objects across processes, using IPC mechanisms with a JSON-RPC 2.0 protocol to enable an architecture that separates models from the inference engine.

## 🚀 Project Overview

Shared Tensor is a cross-process communication library designed specifically for deep learning and AI applications. It uses IPC mechanisms and the JSON-RPC protocol to provide:

- **Efficient GPU Memory Sharing**: Cross-process sharing of PyTorch tensors and models
- **Remote Function Execution**: Easy remote function calls through decorators
- **Async/Sync Support**: Flexible execution modes for different scenarios
- **Model Serving**: Deploy machine learning models as independent services
- **Distributed Inference**: Support for distributed computing in multi-GPU environments

## 📋 Core Features

### 🔄 Cross-Process Communication
- **JSON-RPC 2.0 Protocol**: Standardized remote procedure calls
- **HTTP Transport**: Reliable HTTP-based communication mechanism
- **Serialization Optimization**: Efficient PyTorch object serialization/deserialization

### 🎯 Function Sharing
- **Decorator Pattern**: Easy function sharing using `@provider.share`
- **Auto Discovery**: Smart function path resolution and import
- **Parameter Passing**: Support for complex data type parameters

### ⚡ Async Support
- **Async Execution**: `AsyncSharedTensorProvider` supports non-blocking calls
- **Task Management**: Complete async task status tracking
- **Concurrent Processing**: Efficient concurrent request handling

### 🖥️ GPU Compatibility
- **CUDA Support**: Native CUDA tensor sharing support
- **Device Management**: Smart data migration between devices
- **Memory Optimization**: Efficient GPU memory usage

## 🛠️ Installation Guide

### Requirements

- **Python**: 3.8+
- **Operating System**: Linux (recommended)
- **PyTorch**: 1.12.0+
- **CUDA**: Optional, for GPU support

### Installation Methods

#### Install from PyPI

```bash
pip install shared-tensor
```

#### Install from Source

```bash
# Clone the repository
git clone https://github.com/world-sim-dev/shared-tensor.git
cd shared-tensor

# Install dependencies
pip install -r requirements.txt

# Install the package
pip install -e .
```

#### Development Installation

```bash
# Install with development dependencies
pip install -e ".[dev]"

# Install with test dependencies
pip install -e ".[test]"
```

### Verify Installation

```bash
# Check core functionality
python -c "import shared_tensor; print('✓ Shared Tensor installed successfully')"
```
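
Since CUDA is optional, you may also want to confirm that PyTorch can see a GPU before relying on CUDA tensor sharing. This check uses only standard PyTorch calls:

```python
# Optional: confirm PyTorch and CUDA availability for GPU sharing
# (a CUDA-capable device is only needed for GPU tensors; CPU still works).
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```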

## 🎯 Quick Start

### 1. Basic Function Sharing

```python
from shared_tensor.async_provider import AsyncSharedTensorProvider

# Create provider
provider = AsyncSharedTensorProvider()

# Share a simple function
@provider.share()
def add_numbers(a, b):
    return a + b

# Share a PyTorch function
@provider.share()
def create_tensor(shape):
    import torch
    return torch.zeros(shape)

# Share a model-loading function
@provider.share()
def load_model():
    ...

```
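
Once a server from the next step is running, calling a decorated function executes it remotely, as the model example later shows with `create_model()`. A minimal usage sketch under that assumption:

```python
# Sketch: calls to the decorated functions are proxied to the server
# process; return values are serialized back to the caller.
result = add_numbers(1, 2)      # runs in the server process
tensor = create_tensor((2, 3))  # remote torch.zeros, returned locally

print(result)        # 3
print(tensor.shape)  # torch.Size([2, 3])
```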

### 2. Start Server

```bash
# Method 1: use the command-line tool (single server)
shared-tensor-server

# Method 2: use torchrun to launch one server per rank
torchrun --nproc_per_node=4 --no-python shared-tensor-server

# Method 3: run the server module directly for custom configuration
python shared_tensor/server.py
```


## 📖 Detailed Usage

### Model Sharing Example

```python
import torch
import torch.nn as nn

from shared_tensor.async_provider import AsyncSharedTensorProvider

# Create provider
provider = AsyncSharedTensorProvider()

# Define model
class SimpleNet(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, output_size)
    
    def forward(self, x):
        x = self.fc1(x)
        x = self.relu(x)
        x = self.fc2(x)
        return x

# Share model creation function
@provider.share(name="create_model")
def create_model(input_size=784, hidden_size=128, output_size=10):
    model = SimpleNet(input_size, hidden_size, output_size)
    return model

# Call the shared function, then run inference on the returned model
model = create_model()
input_data = torch.randn(1, 784)
with torch.no_grad():
    output = model(input_data)
```


## 🔧 Configuration Options

### Server Configuration

```python
provider = AsyncSharedTensorProvider(
    server_port=2537 + global_rank,  # local HTTP server port; default is 2537 + the process's global rank
    verbose_debug=False,             # log request/response details
    poll_interval=1.0,               # status-polling interval in seconds (async provider only)
    default_enabled=True,            # enable shared-tensor; can be re-enabled via `export __SHARED_TENSOR_ENABLED__=true`
)

@provider.share(
    name=None,                     # name for logging/debugging; the default cache key when singleton is enabled
    wait=True,                     # True: return the function's result; False: return an async handle
    singleton=True,                # keep only one instance of the function's result
    singleton_key_formatter=None,  # format template filled from the call's parameters; used as the final cache key
)
def get_demo_model():
    ...
```
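
For example, `singleton_key_formatter` lets the cache key depend on call parameters. A hypothetical sketch, assuming the formatter is a `str.format`-style template filled from the function's arguments (the exact template semantics are library-defined):

```python
# Hypothetical: cache one loaded model per checkpoint path, so repeated
# calls with the same `path` reuse the server-side instance.
@provider.share(
    name="load_checkpoint",
    singleton=True,
    singleton_key_formatter="checkpoint:{path}",
)
def load_checkpoint(path="model.pt"):
    import torch
    return torch.load(path)
```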

## 🧪 Testing

### Run Test Suite

```bash
# Run all tests
python tests/run_tests.py

# Run specific category tests
python tests/run_tests.py --category unit
python tests/run_tests.py --category integration
python tests/run_tests.py --category pytorch

# Run only PyTorch related tests
python tests/run_tests.py --torch-only

# Verbose output
python tests/run_tests.py --verbose
```

### Test Environment Info

```bash
# Check test environment
python tests/run_tests.py --env-info
```

### Individual Test Files

```bash
# Test tensor serialization
python tests/pytorch_tests/test_tensor_serialization.py

# Test async system
python tests/integration/test_async_system.py

# Test client
python tests/integration/test_client.py
```

## 🏗️ Architecture Design

### Core Components

```
shared-tensor/
├── shared_tensor/             # Core modules
│   ├── server.py              # JSON-RPC server
│   ├── client.py              # Sync client
│   ├── provider.py            # Sync provider
│   ├── async_client.py        # Async client
│   ├── async_provider.py      # Async provider
│   ├── async_task.py          # Async task management
│   ├── jsonrpc.py             # JSON-RPC protocol implementation
│   ├── utils.py               # Utility functions
│   └── errors.py              # Exception definitions
├── examples/                  # Usage examples
└── tests/                     # Test suite
```
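
The layout above implies two client-facing entry points. A sketch of the imports: the async one appears in the Quick Start, while the sync class name comes from the Debug Tips below and its module path is inferred from the tree:

```python
# Sync provider: blocking remote calls
from shared_tensor.provider import SharedTensorProvider

# Async provider: non-blocking calls with task tracking
from shared_tensor.async_provider import AsyncSharedTensorProvider
```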

### Communication Flow

```mermaid
sequenceDiagram
    participant CA as Client App
    participant SC as SharedTensorClient
    participant SS as SharedTensorServer
    participant FE as Function Executor
    
    Note over CA, FE: Client-Server Communication Flow
    
    CA->>SC: call_function("model_inference", args)
    SC->>SC: Serialize parameters
    SC->>SS: HTTP POST /jsonrpc<br/>JSON-RPC Request
    
    Note over SS: Server Processing
    SS->>SS: Parse JSON-RPC request
    SS->>SS: Resolve function path
    SS->>FE: Import & execute function
    FE->>FE: Deserialize parameters
    FE->>FE: Execute function logic
    FE->>SS: Return execution result
    
    Note over SS: Response Preparation
    SS->>SS: Serialize result
    SS->>SS: Create JSON-RPC response
    SS->>SC: HTTP Response<br/>JSON-RPC Result
    
    Note over SC: Client Processing
    SC->>SC: Parse response
    SC->>SC: Deserialize result
    SC->>CA: Return final result
    
    Note over CA, FE: End-to-End Process Complete
```
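
The wire format follows JSON-RPC 2.0. Below is a hedged illustration of the request/response shape as Python dicts; the method name and `params` layout are assumptions, since the actual payload schema is internal to the library:

```python
# Assumed shape only: `jsonrpc`, `id`, `method`, `params`, and `result`
# are standard JSON-RPC 2.0 fields; their contents here are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "call_function",                   # hypothetical method name
    "params": {
        "function": "mymodule.model_inference",  # resolved function path
        "args": "<serialized parameters>",
    },
}
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": "<serialized return value>",
}
```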

### Debug Tips

1. **Enable verbose logging**:
```python
import logging
logging.basicConfig(level=logging.DEBUG)
```

2. **Use debug mode**:
```python
provider = SharedTensorProvider(verbose_debug=True)
```

3. **Check function paths**:
```python
provider = SharedTensorProvider()
print(provider._registered_functions)
```

## 🤝 Contributing

We welcome community contributions! Please follow these steps:

### Development Environment Setup

```bash
# Clone repository
git clone https://github.com/world-sim-dev/shared-tensor.git
cd shared-tensor

# Create virtual environment
python -m venv venv
source venv/bin/activate

# Install development dependencies
pip install -e ".[dev]"

# Install pre-commit hooks
pre-commit install

# Package & publish
python -m pip install build twine
python -m build --sdist --wheel
python -m twine upload --repository testpypi dist/*
python -m twine upload dist/*
```

### Code Standards

```bash
# Code formatting
black shared_tensor/ tests/ examples/

# Import sorting
isort shared_tensor/ tests/ examples/

# Static checking
flake8 shared_tensor/
mypy shared_tensor/
```

### Submission Process

1. Fork the project and create a feature branch
2. Write code and tests
3. Run the complete test suite
4. Submit a Pull Request

### Test Requirements

- New features must include tests
- Maintain test coverage > 90%
- All tests must pass

## 📄 License

This project is licensed under the Apache 2.0 License - see the [LICENSE](LICENSE) file for details.

## 🙏 Acknowledgments

- [PyTorch](https://pytorch.org/) - Deep learning framework
- [JSON-RPC 2.0](https://www.jsonrpc.org/) - Remote procedure call protocol

## 📞 Contact Us

- **Issues**: [GitHub Issues](https://github.com/world-sim-dev/shared-tensor/issues)
- **Documentation**: [Shared Tensor Documentation](https://github.com/world-sim-dev/shared-tensor/wiki)
- **Source**: [GitHub Repository](https://github.com/world-sim-dev/shared-tensor)

---

**Shared Tensor** - Making GPU memory sharing simple and efficient 🚀

            
