| Field | Value |
| --- | --- |
| Name | tensorcontainer |
| Version | 0.6.1 |
| Summary | TensorDict-like functionality for PyTorch with PyTree compatibility and torch.compile support |
| Homepage | https://github.com/mctigger/tensor-container |
| Author email | Tim Joseph <tim@mctigger.com> |
| Upload time | 2025-07-19 22:06:54 |
| Requires Python | >=3.9 |
| License | MIT |
| Keywords | deep learning, tensordict, pytorch |
# Tensor Container
*Tensor containers for PyTorch with PyTree compatibility and torch.compile optimization*
[Python 3.9+](https://www.python.org/downloads/) · [License: MIT](https://opensource.org/licenses/MIT) · [PyTorch](https://pytorch.org/)
> **⚠️ Academic Research Project**: This project exists solely for academic purposes to explore and learn PyTorch internals. For production use, please use the official, well-maintained [**torch/tensordict**](https://github.com/pytorch/tensordict) library.
Tensor Container provides efficient, type-safe tensor containers for PyTorch workflows, with PyTree integration and torch.compile support for batched tensor operations. It ships dictionary-style and dataclass-style containers, probabilistic distribution wrappers, and consistent batch/event dimension semantics for machine learning workloads.
## What is TensorContainer?
TensorContainer transforms how you work with structured tensor data in PyTorch by providing **tensor-like operations for entire data structures**. Instead of manually managing individual tensors across devices, batch dimensions, and nested hierarchies, TensorContainer lets you treat complex data as unified entities that behave just like regular tensors.
### 🚀 **Unified Operations Across Data Types**
Apply tensor operations like `view()`, `permute()`, `detach()`, and device transfers to entire data structures—no matter how complex:
```python
# Single operation transforms entire distribution
distribution = distribution.view(2, 3, 4).permute(1, 0, 2).detach()
# Works seamlessly across TensorDict, TensorDataClass, and TensorDistribution
data = data.to('cuda').reshape(batch_size, -1).clone()
```
### 🔄 **Drop-in Compatibility with PyTorch**
TensorContainer integrates seamlessly with existing PyTorch workflows:
- **torch.distributions compatibility**: TensorDistribution is API-compatible with `torch.distributions` while adding tensor-like operations
- **PyTree support**: All containers work with `torch.utils._pytree` operations and `torch.compile` (see the sketch after this list)
- **Zero learning curve**: If you know PyTorch tensors, you already know TensorContainer
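To make the PyTree point concrete, the standard flatten/unflatten utilities apply to a container directly. A minimal sketch, assuming the `TensorDict` API shown in the Quick Start below:

```python
import torch
import torch.utils._pytree as pytree
from tensorcontainer import TensorDict

td = TensorDict({
    'x': torch.randn(8, 4),
    'y': torch.randn(8, 2),
}, shape=(8,), device='cpu')

# Flatten into leaf tensors plus a spec describing the container structure
leaves, spec = pytree.tree_flatten(td)

# Transform the leaves, then rebuild the same container type from the spec
rebuilt = pytree.tree_unflatten([leaf * 2 for leaf in leaves], spec)
```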
### ⚡ **Eliminates Boilerplate Code**
Compare the two approaches:
**With torch.distributions** (manual parameter handling):
```python
# Requires type-specific parameter extraction and reconstruction
if isinstance(dist, Normal):
    detached = Normal(loc=dist.loc.detach(), scale=dist.scale.detach())
elif isinstance(dist, Categorical):
    detached = Categorical(logits=dist.logits.detach())
# ... more type checks needed
```
**With TensorDistribution** (unified interface):
```python
# Works for any distribution type
detached = dist.detach()
```
### 🏗️ **Structured Data Made Simple**
Handle complex, nested tensor structures with the same ease as single tensors (a short sketch follows this list):
- **Batch semantics**: Consistent shape handling across all nested tensors
- **Device management**: Move entire structures between CPU/GPU with single operations
- **Shape validation**: Automatic verification of tensor compatibility
- **Type safety**: Full IDE support with static typing and autocomplete
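A minimal sketch of these points in practice. The nested layout here is hypothetical; the `TensorDict` API itself is shown in the Quick Start below:

```python
import torch
from tensorcontainer import TensorDict

# A nested structure: 'agent' carries its own sub-container
batch = TensorDict({
    'obs': torch.randn(32, 128),
    'agent': TensorDict({'hidden': torch.randn(32, 256)}, shape=(32,), device='cpu'),
}, shape=(32,), device='cpu')

# One call moves every nested tensor between devices (guarded for portability)
if torch.cuda.is_available():
    batch = batch.to('cuda')

# One call reshapes the shared batch dimension; event dims ride along
batch = batch.reshape(4, 8)
```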
TensorContainer doesn't just store your data—it makes working with structured tensors as intuitive as working with individual tensors, while maintaining full compatibility with the PyTorch ecosystem you already know.
## Table of Contents
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Features](#features)
- [API Overview](#api-overview)
- [torch.compile Compatibility](#torchcompile-compatibility)
- [Contributing](#contributing)
- [Documentation](#documentation)
- [License](#license)
- [Authors](#authors)
- [Contact and Support](#contact-and-support)
## Installation
### From Source (Development)
```bash
# Clone the repository
git clone https://github.com/mctigger/tensor-container.git
cd tensor-container
# Install in development mode
pip install -e .
# Install with development dependencies
pip install -e ".[dev]"
```
### Requirements
- Python 3.9+
- PyTorch 2.0+
## Quick Start
### TensorDict: Dictionary-Style Containers
```python
import torch
from tensorcontainer import TensorDict
# Create a TensorDict with batch semantics
data = TensorDict({
    'observations': torch.randn(32, 128),
    'actions': torch.randn(32, 4),
    'rewards': torch.randn(32, 1)
}, shape=(32,), device='cpu')
# Dictionary-like access
obs = data['observations']
data['new_field'] = torch.zeros(32, 10)
# Batch operations work seamlessly
stacked_data = torch.stack([data, data]) # Shape: (2, 32)
```
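The whole-container operations from the overview apply to `data` and `stacked_data` as well; a short continuation of the example above (the GPU move is guarded, since CUDA may be unavailable):

```python
# Flatten the (2, 32) batch from the stack back into a single batch of 64
flat = stacked_data.reshape(64)

# Detach every nested tensor in one call
detached = flat.detach()

# Move the entire structure to GPU in one call
if torch.cuda.is_available():
    detached = detached.to('cuda')
```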
### TensorDataClass: Type-Safe Containers
```python
import torch
from tensorcontainer import TensorDataClass
class RLData(TensorDataClass):
    observations: torch.Tensor
    actions: torch.Tensor
    rewards: torch.Tensor
# Create with full type safety and IDE support
data = RLData(
    observations=torch.randn(32, 128),
    actions=torch.randn(32, 4),
    rewards=torch.randn(32, 1),
    shape=(32,),
    device='cpu'
)
# Type-safe field access with autocomplete
obs = data.observations
data.actions = torch.randn(32, 8) # Type-checked assignment
```
### TensorDistribution: Probabilistic Containers
```python
import torch
from tensorcontainer import TensorDistribution
# Built-in distribution types
from tensorcontainer.tensor_distribution import (
    TensorNormal, TensorBernoulli, TensorCategorical,
    TensorTruncatedNormal, TensorTanhNormal
)
# Create probabilistic tensor containers
normal_dist = TensorNormal(
    loc=torch.zeros(32, 4),
    scale=torch.ones(32, 4),
    shape=(32,),
    device='cpu'
)
# Sample and compute probabilities
samples = normal_dist.sample() # Shape: (32, 4)
log_probs = normal_dist.log_prob(samples)
entropy = normal_dist.entropy()
# Categorical distributions for discrete actions
categorical = TensorCategorical(
    logits=torch.randn(32, 6),  # 6 possible actions
    shape=(32,),
    device='cpu'
)
```
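Since TensorDistribution is API-compatible with `torch.distributions`, the categorical above can be used like its stock counterpart. A small follow-up sketch (the shapes assume standard `Categorical` semantics):

```python
# Integer action indices, one per batch element
actions = categorical.sample()                      # shape: (32,)

# Log-probabilities of the sampled actions under the distribution
action_log_probs = categorical.log_prob(actions)    # shape: (32,)
```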
### PyTree Operations
```python
# All containers work seamlessly with PyTree operations
import torch.utils._pytree as pytree
# Transform all tensors in the container
doubled_data = pytree.tree_map(lambda x: x * 2, data)
# Combine multiple containers
combined = pytree.tree_map(lambda x, y: x + y, data1, data2)
```
## Features
- **torch.compile Optimized**: Compatible with PyTorch's JIT compiler
- **PyTree Support**: Integration with `torch.utils._pytree` for tree operations
- **Zero-Copy Operations**: Efficient tensor sharing and manipulation
- **Type Safety**: Static typing support with IDE autocomplete and type checking
- **Batch Semantics**: Consistent batch/event dimension handling
- **Shape Validation**: Automatic validation of tensor shapes and device consistency
- **Multiple Container Types**: Different container types for different use cases
- **Probabilistic Support**: Distribution containers for probabilistic modeling
- **Comprehensive Testing**: Extensive test suite with compile compatibility verification
- **Memory Efficient**: Optimized memory usage with slots-based dataclasses
## API Overview
### Core Components
- **`TensorContainer`**: Base class providing core tensor manipulation operations with batch/event dimension semantics
- **`TensorDict`**: Dictionary-like container for dynamic tensor collections with nested structure support
- **`TensorDataClass`**: DataClass-based container for static, typed tensor structures
- **`TensorDistribution`**: Distribution wrapper for probabilistic tensor operations
### Key Concepts
- **Batch Dimensions**: Leading dimensions defined by the `shape` parameter, consistent across all tensors (see the sketch after this list)
- **Event Dimensions**: Trailing dimensions beyond batch shape, can vary per tensor
- **PyTree Integration**: All containers are registered PyTree nodes for seamless tree operations
- **Device Consistency**: Automatic validation ensures all tensors reside on compatible devices
- **Unsafe Construction**: Context manager for performance-critical scenarios with validation bypass
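A minimal sketch of the batch/event split, assuming the container exposes its batch shape via `.shape` as the constructors above suggest:

```python
import torch
from tensorcontainer import TensorDict

td = TensorDict({
    'obs': torch.randn(32, 128),  # batch (32,) + event (128,)
    'act': torch.randn(32, 4),    # batch (32,) + event (4,)
}, shape=(32,), device='cpu')

# The container's shape is the shared batch shape only
assert tuple(td.shape) == (32,)

# Reshape acts on batch dimensions; per-tensor event dims are untouched
td2 = td.reshape(4, 8)
assert td2['obs'].shape == (4, 8, 128)
assert td2['act'].shape == (4, 8, 4)
```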
## torch.compile Compatibility
Tensor Container is designed for `torch.compile` compatibility:
```python
@torch.compile
def process_batch(data: TensorDict) -> TensorDict:
    # PyTree operations compile efficiently
    return TensorContainer._tree_map(lambda x: torch.relu(x), data)

@torch.compile
def sample_and_score(dist: TensorNormal, actions: torch.Tensor) -> torch.Tensor:
    # Distribution operations are compile-safe
    return dist.log_prob(actions)
# All operations compile efficiently with minimal graph breaks
compiled_result = process_batch(tensor_dict)
log_probs = sample_and_score(normal_dist, action_tensor)
```
The testing framework includes compile compatibility verification to ensure operations work efficiently under JIT compilation, including:
- Graph break detection and minimization
- Recompilation tracking
- Memory leak prevention
- Performance benchmarking
## Contributing
Contributions are welcome! Tensor Container is a learning project for exploring PyTorch internals and tensor container implementations.
### Development Setup
```bash
# Clone and install in development mode
git clone https://github.com/mctigger/tensor-container.git
cd tensor-container
pip install -e ".[dev]"
```
### Running Tests
```bash
# Run all tests with coverage
pytest --strict-markers --cov=src
# Run specific test modules
pytest tests/tensor_dict/test_compile.py
pytest tests/tensor_dataclass/
pytest tests/tensor_distribution/
# Run compile-specific tests
pytest tests/tensor_dict/test_graph_breaks.py
pytest tests/tensor_dict/test_recompilations.py
```
### Development Guidelines
- All new features must maintain `torch.compile` compatibility
- Comprehensive tests required, including compile compatibility verification
- Follow existing code patterns and typing conventions
- Distribution implementations must support KL divergence registration
- Memory efficiency considerations for large-scale tensor operations
- Unsafe construction patterns for performance-critical paths
### Contribution Process
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Make your changes with appropriate tests
4. Ensure all tests pass and maintain coverage
5. Submit a pull request with a clear description
## Documentation
The project includes documentation:
- **[`docs/compatibility.md`](docs/compatibility.md)**: Python version compatibility guide and best practices
- **[`docs/testing.md`](docs/testing.md)**: Testing philosophy, standards, and guidelines
- **Source Code Documentation**: Extensive docstrings and type annotations throughout the codebase
- **Test Coverage**: 643+ tests covering all major functionality with 86% code coverage
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Authors
- **Tim Joseph** - [mctigger](https://github.com/mctigger)
## Contact and Support
- **Issues**: Report bugs and request features on [GitHub Issues](https://github.com/mctigger/tensor-container/issues)
- **Discussions**: Join conversations on [GitHub Discussions](https://github.com/mctigger/tensor-container/discussions)
- **Email**: For direct inquiries, contact [tim@mctigger.com](mailto:tim@mctigger.com)
---
*Tensor Container is an academic research project for learning PyTorch internals and tensor container patterns. For production applications, we strongly recommend using the official [torch/tensordict](https://github.com/pytorch/tensordict) library, which is actively maintained by the PyTorch team.*