<div align="center">

![MiniTensor Logo](docs/_static/img/minitensor-small.png)

</div>
<h3 align="center">
A lightweight, high-performance tensor operations library with automatic differentiation, inspired by <a href="https://github.com/pytorch/pytorch">PyTorch</a> and powered by a Rust engine.
</h3>

---

<div align="center">

[![Python 3.10+](https://img.shields.io/badge/Python-3.10+-fcbc2c.svg?logo=python&logoColor=white)](https://www.python.org/downloads/)
[![rustc 1.85+](https://img.shields.io/badge/rustc-1.85+-blue.svg?logo=rust&logoColor=white)](https://rust-lang.github.io/rfcs/2495-min-rust-version.html)
[![Test Linux](https://github.com/neuralsorcerer/minitensor/actions/workflows/test_ubuntu.yml/badge.svg)](https://github.com/neuralsorcerer/minitensor/actions/workflows/test_ubuntu.yml?query=branch%3Amain)
[![Test Windows](https://github.com/neuralsorcerer/minitensor/actions/workflows/test_windows.yml/badge.svg)](https://github.com/neuralsorcerer/minitensor/actions/workflows/test_windows.yml?query=branch%3Amain)
[![Test MacOS](https://github.com/neuralsorcerer/minitensor/actions/workflows/test_macos.yml/badge.svg)](https://github.com/neuralsorcerer/minitensor/actions/workflows/test_macos.yml?query=branch%3Amain)
[![Lints](https://github.com/neuralsorcerer/minitensor/actions/workflows/lints.yml/badge.svg)](https://github.com/neuralsorcerer/minitensor/actions/workflows/lints.yml?query=branch%3Amain)
[![License](https://img.shields.io/badge/License-Apache%202.0-3c60b1.svg?logo=opensourceinitiative&logoColor=white)](./LICENSE)
[![DOI](https://zenodo.org/badge/1049200313.svg)](https://doi.org/10.5281/zenodo.17162776)

</div>

## Features

- **High Performance**: Rust backend for maximum speed and memory efficiency
- **Python-Friendly**: Familiar PyTorch-like API for easy adoption
- **Neural Networks**: Complete neural network layers and optimizers
- **NumPy Integration**: Seamless interoperability with NumPy arrays
- **Automatic Differentiation**: Built-in gradient computation for training
- **Extensible**: Modular design for easy customization and extension

## Quick Start

### Installation

**From PyPI:**

```bash
pip install minitensor
```

**From Source:**

```bash
# Clone the repository
git clone https://github.com/neuralsorcerer/minitensor.git
cd minitensor

# Quick install with make (Linux/macOS)
make install

# Or manually with maturin
pip install maturin[patchelf]
maturin develop --release

# Optional: editable install with pip (debug build by default)
pip install -e .
```

> _Note:_ `pip install -e .` builds a debug version by default; pass `--config-settings=--release` for a release build.

**Using the install script (Linux/macOS/Windows):**

```bash
bash install.sh
```

Common options:

```bash
bash install.sh --no-venv          # Use current Python env (no virtualenv)
bash install.sh --venv .myvenv     # Create/use a specific venv directory
bash install.sh --debug            # Debug build (default is --release)
bash install.sh --python /usr/bin/python3.12   # Use a specific Python
```

The script ensures Python 3.10+, sets up a virtual environment by default, installs Rust (via rustup if needed), installs maturin (with patchelf on Linux), builds MiniTensor, and verifies the installation.

### Basic Usage

```python
import minitensor as mt
from minitensor import nn, optim

# Create tensors
x = mt.randn(32, 784)  # Batch of 32 samples
y = mt.zeros(32, 10)   # Target labels

# Build a neural network
model = nn.Sequential([
    nn.DenseLayer(784, 128),
    nn.ReLU(),
    nn.DenseLayer(128, 10)
])

# Set up training
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(0.001, betas=(0.9, 0.999), epsilon=1e-8)

print(f"Model: {model}")
print(f"Input shape: {x.shape}")
```
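
A single optimization step then follows the same pattern as the full training loop later in this README. The snippet below is a sketch that continues from the objects defined above and uses only calls that appear elsewhere in this document (`model(x)`, `criterion(pred, y)`, `loss.backward()`).

```python
# One forward/backward pass (sketch; reuses x, y, model, and criterion from above)
logits = model(x)            # run the batch through the Sequential model
loss = criterion(logits, y)  # cross-entropy between predictions and targets
loss.backward()              # backpropagate; the full update cycle is shown in the Training Loop example
```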

## Documentation

### Core Components

#### Tensors

```python
import minitensor as mt
import numpy as np

# Create tensors
x = mt.zeros(3, 4)          # Zeros
y = mt.ones(3, 4)           # Ones
z = mt.randn(2, 2)          # Random normal
np_array = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.float32)
w = mt.from_numpy(np_array) # From NumPy

# Operations
result = x + y                      # Element-wise addition
product = x.matmul(y.T)             # Matrix multiplication
mean_val = x.mean()                 # Reduction operations
max_val = x.max()                   # -inf for empty or all-NaN tensors
min_vals, min_idx = x.min(dim=1)    # Returns values & indices; empty dims yield (inf, 0)
```
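
Results can be moved back to NumPy at any point. The following is a small sketch that reuses the objects defined above and relies only on calls shown elsewhere in this README (`.numpy()` appears in the training loop example).

```python
# Back to NumPy (illustrative sketch)
arr = result.numpy()          # copy the (3, 4) element-wise sum into a NumPy ndarray
print(arr.shape, arr.dtype)
print(float(mean_val.numpy().ravel()[0]))  # extract a Python scalar, as in the training loop
```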

#### Neural Networks

```python
from minitensor import nn

# Layers
dense = nn.DenseLayer(10, 5)        # Dense layer (fully connected)
conv = nn.Conv2d(3, 16, 3)          # 2D convolution
bn = nn.BatchNorm1d(128)            # Batch normalization
dropout = nn.Dropout(0.5)           # Dropout regularization

# Activations
relu = nn.ReLU()                    # ReLU activation
sigmoid = nn.Sigmoid()              # Sigmoid activation
tanh = nn.Tanh()                    # Tanh activation
gelu = nn.GELU()                    # GELU activation

# Loss functions
mse = nn.MSELoss()                  # Mean squared error
ce = nn.CrossEntropyLoss()          # Cross entropy
bce = nn.BCELoss()                  # Binary cross entropy
```
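
These building blocks compose the same way as the Quick Start model. The sketch below stacks a few of the listed layers into `nn.Sequential` and runs a batch through it; treat it as illustrative, since it assumes `BatchNorm1d` and `Dropout` can sit inside `Sequential` exactly like the layers shown earlier.

```python
import minitensor as mt
from minitensor import nn

# A small MLP built from the layers listed above (illustrative sketch)
model = nn.Sequential([
    nn.DenseLayer(784, 128),
    nn.BatchNorm1d(128),   # normalize the hidden activations
    nn.ReLU(),
    nn.Dropout(0.5),       # regularization during training
    nn.DenseLayer(128, 10),
])

x = mt.randn(32, 784)      # batch of 32 flattened inputs
logits = model(x)          # forward pass, PyTorch-style call
print(logits.shape)        # expected: (32, 10)
```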

#### Optimizers

```python
from minitensor import optim

# Optimizers
sgd = optim.SGD(0.01, 0.9, 0.0, False)                      # SGD with momentum
adam = optim.Adam(0.001, betas=(0.9, 0.999), epsilon=1e-8)  # Adam optimizer
rmsprop = optim.RMSprop(0.01, 0.99, 1e-8, 0.0, 0.0)         # RMSprop optimizer
```
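
Note that the training-loop example later in this README constructs optimizers as `optim.SGD(model.parameters(), lr=0.1)`; whichever construction form is used, the update cycle is the same. The snippet below is a sketch that assumes `optimizer` and `loss` are set up as in that training loop.

```python
# Standard update cycle (sketch; assumes `optimizer` and `loss` exist as in the training loop below)
optimizer.zero_grad()   # clear gradients accumulated in the previous step
loss.backward()         # backpropagate the current loss
optimizer.step()        # apply the parameter update
```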

## Architecture

Minitensor is built with a modular architecture:

```text
┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│   Python API    │    │   PyO3 Bindings  │    │   Rust Engine   │
│                 │<-->│                  │<-->│                 │
│ • Tensor        │    │ • Type Safety    │    │ • Performance   │
│ • nn.Module     │    │ • Memory Mgmt    │    │ • Autograd      │
│ • Optimizers    │    │ • Error Handling │    │ • SIMD/GPU      │
└─────────────────┘    └──────────────────┘    └─────────────────┘
```

### Components

- **Engine**: High-performance Rust backend with SIMD optimizations
- **Bindings**: PyO3-based Python bindings for seamless interop
- **Python API**: Familiar PyTorch-like interface for ease of use

## Examples

### Simple Neural Network

```python
import minitensor as mt
from minitensor import nn, optim

# Create a simple classifier
model = nn.Sequential([
    nn.DenseLayer(784, 128),
    nn.ReLU(),
    nn.DenseLayer(128, 10),
])

# Initialize model
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(0.001, betas=(0.9, 0.999), epsilon=1e-8)
```

### Training Loop

```python
import minitensor as mt
from minitensor import nn, optim

# Synthetic data: y = 3x + 0.5
x = mt.randn(64, 1)
y = 3 * x + 0.5

# Model, loss, optimizer
model = nn.DenseLayer(1, 1)
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

for epoch in range(100):
    pred = model(x)
    loss = criterion(pred, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if (epoch + 1) % 20 == 0:
        loss_val = float(loss.numpy().ravel()[0])
        print(f"Epoch {epoch+1:03d} | Loss: {loss_val:.4f}")
```

## Development & Testing

The Python package is a thin wrapper around the compiled Rust engine. Whenever
you make changes under `engine/` or `bindings/`, rebuild the extension so the
Python layer talks to the latest native implementation before running tests:

```bash
# 1. Build the extension in editable mode (release profile)
pip install -e . --config-settings=--release

# 2. Run the Rust unit and integration tests
cargo test

# 3. Run the Python test suite
pytest
```

Running `pip install -e .` refreshes the `minitensor._core` module. Skipping this
step leaves Python bound to an older shared library, which can surface as missing
attributes or stale logic when validating changes locally.
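
To confirm that the interpreter is actually loading the freshly built extension rather than a stale copy, a quick check such as the following can help (a sketch; it assumes nothing beyond `minitensor._core` being an importable extension module, as described above).

```python
# Check which compiled extension Python resolved (sketch)
import minitensor._core as core

print(core.__file__)  # path of the shared library currently loaded
```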

### Code Style

- **Rust**: Follow `rustfmt` and `clippy` recommendations
- **Python**: Use `black` and `isort` for formatting

## Performance

Minitensor is designed for performance:

- **Memory Efficient**: Zero-copy operations where possible
- **SIMD Optimized**: Vectorized operations for maximum throughput
- **GPU Ready**: CUDA and Metal backend support (coming soon)
- **Parallel**: Multi-threaded operations for large tensors

## Citation

If you use minitensor in your work and wish to refer to it, please use the following BibTeX entry.

```bibtex
@software{minitensor2025,
  author = {Soumyadip Sarkar},
  title = {MiniTensor: A Lightweight, High-Performance Tensor Operations Library},
  url = {https://github.com/neuralsorcerer/minitensor},
  year = {2025},
}
```

## License

This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.

## Acknowledgments

- Inspired by [PyTorch's design and API](https://pytorch.org)
- Built with [Rust's](https://www.rust-lang.org) performance and safety
- Powered by [PyO3](https://github.com/PyO3/pyo3) for Python integration


            
