zetascale

Name: zetascale
Version: 2.7.3
Home page: https://github.com/kyegomez/zeta
Summary: Rapidly Build, Optimize, and Train SOTA AI Models
Upload time: 2024-09-09 17:48:22
Maintainer: None
Docs URL: None
Author: Zeta Team
Requires Python: <4.0,>=3.10
License: MIT
Keywords: artificial intelligence, deep learning, optimizers, prompt engineering, swarms, agents, llms, transformers, multi-agent, swarms of agents, enterprise-grade agents, production-grade agents, multi-grade-agents, generative agents, generative ai, agent marketplace, agent store, lstms, grus, rnns, cnns, mlps, dnns
Requirements: No requirements were recorded.
[![Multi-Modality](images/agorabanner.png)](https://discord.gg/qUtxnK2NMf)

![Zeta banner](images/zeta.png)
Build SOTA AI Models 80% faster with modular, high-performance, and scalable building blocks!

[![Docs](https://readthedocs.org/projects/zeta/badge/)](https://zeta.readthedocs.io)

<p>
  <a href="https://github.com/kyegomez/zeta/blob/main/LICENSE"><img alt="MIT License" src="https://img.shields.io/badge/license-MIT-blue.svg" /></a>
  <a href="https://pypi.org/project/zetascale"><img alt="MIT License" src="https://badge.fury.io/py/zetascale.svg" /></a>
</p>

[![Join our Discord](https://img.shields.io/badge/Discord-Join%20our%20server-5865F2?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/agora-999382051935506503) [![Subscribe on YouTube](https://img.shields.io/badge/YouTube-Subscribe-red?style=for-the-badge&logo=youtube&logoColor=white)](https://www.youtube.com/@kyegomez3242) [![Connect on LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/kye-g-38759a207/) [![Follow on X.com](https://img.shields.io/badge/X.com-Follow-1DA1F2?style=for-the-badge&logo=x&logoColor=white)](https://x.com/kyegomezb)

[![GitHub issues](https://img.shields.io/github/issues/kyegomez/zeta)](https://github.com/kyegomez/zeta/issues) [![GitHub forks](https://img.shields.io/github/forks/kyegomez/zeta)](https://github.com/kyegomez/zeta/network) [![GitHub stars](https://img.shields.io/github/stars/kyegomez/zeta)](https://github.com/kyegomez/zeta/stargazers) [![GitHub license](https://img.shields.io/github/license/kyegomez/zeta)](https://github.com/kyegomez/zeta/blob/main/LICENSE)[![GitHub star chart](https://img.shields.io/github/stars/kyegomez/zeta?style=social)](https://star-history.com/#kyegomez/zeta)[![Dependency Status](https://img.shields.io/librariesio/github/kyegomez/zeta)](https://libraries.io/github/kyegomez/zeta) [![Downloads](https://static.pepy.tech/badge/zeta/month)](https://pepy.tech/project/zetascale)

[![Join the Agora discord](https://img.shields.io/discord/1110910277110743103?label=Discord&logo=discord&logoColor=white&style=plastic&color=d7b023)![Share on Twitter](https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Share%20%40kyegomez/zeta)](https://twitter.com/intent/tweet?text=Check%20out%20this%20amazing%20AI%20project:%20&url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fzeta) [![Share on Facebook](https://img.shields.io/badge/Share-%20facebook-blue)](https://www.facebook.com/sharer/sharer.php?u=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fzeta) [![Share on LinkedIn](https://img.shields.io/badge/Share-%20linkedin-blue)](https://www.linkedin.com/shareArticle?mini=true&url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fzeta&title=&summary=&source=)

[![Share on Reddit](https://img.shields.io/badge/-Share%20on%20Reddit-orange)](https://www.reddit.com/submit?url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fzeta&title=zeta%20-%20the%20future%20of%20AI) [![Share on Hacker News](https://img.shields.io/badge/-Share%20on%20Hacker%20News-orange)](https://news.ycombinator.com/submitlink?u=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fzeta&t=zeta%20-%20the%20future%20of%20AI) [![Share on Pinterest](https://img.shields.io/badge/-Share%20on%20Pinterest-red)](https://pinterest.com/pin/create/button/?url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fzeta&media=https%3A%2F%2Fexample.com%2Fimage.jpg&description=zeta%20-%20the%20future%20of%20AI) [![Share on WhatsApp](https://img.shields.io/badge/-Share%20on%20WhatsApp-green)](https://api.whatsapp.com/send?text=Check%20out%20zeta%20-%20the%20future%20of%20AI%20%23zeta%20%23AI%0A%0Ahttps%3A%2F%2Fgithub.com%2Fkyegomez%2Fzeta)

After building thousands of neural nets and hitting the same annoying bottlenecks of chaotic codebases, no modularity, and low-performance modules, I built Zeta so that I and others can quickly prototype, train, and optimize the latest SOTA neural nets and deploy them into production.

Zeta places a radical emphasis on usability, modularity, and performance. It is currently used in hundreds of models across my GitHub and beyond.
Get started below, and let me know if you want my help building any model. I'm here for you 😊 💜


# Install

```bash
$ pip3 install -U zetascale
```

# Usage

## Starting Your Journey

Creating a model with Zeta's building blocks is a breeze. Here's how to quickly instantiate the renowned Multi-Query Attention:

```python
import torch
from zeta import MultiQueryAttention

# Model
model = MultiQueryAttention(
    dim=512,
    heads=8,
)

# Input
text = torch.randn(2, 4, 512)

# Output
output, _, _ = model(text)
print(output.shape)
print(output)

```
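
For intuition, here is a minimal, self-contained sketch of the multi-query idea itself; the layer names and shapes below are illustrative assumptions, not zeta's internals. The key point is that all query heads share a single key/value head, which shrinks the KV cache.

```python
import torch
from torch import nn

# A minimal sketch of multi-query attention (illustrative, not zeta's implementation):
# every query head attends with one shared key/value head.
dim, heads, head_dim = 512, 8, 64
to_q = nn.Linear(dim, heads * head_dim, bias=False)
to_kv = nn.Linear(dim, 2 * head_dim, bias=False)  # one shared K and one shared V head

x = torch.randn(2, 4, dim)
q = to_q(x).view(2, 4, heads, head_dim).transpose(1, 2)         # (2, heads, 4, head_dim)
k, v = to_kv(x).chunk(2, dim=-1)                                # (2, 4, head_dim) each

scores = q @ k.unsqueeze(1).transpose(-2, -1) / head_dim**0.5   # (2, heads, 4, 4)
attn = scores.softmax(dim=-1)
out = (attn @ v.unsqueeze(1)).transpose(1, 2).reshape(2, 4, heads * head_dim)
print(out.shape)  # torch.Size([2, 4, 512])
```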



### `SwiGLU`
The SwiGLU activation takes an input tensor and applies a gating mechanism to selectively pass information. It combines the Swish (SiLU) activation with a gated linear unit (GLU): the Swish branch acts as a gate that controls the flow of information, while the second linear branch carries the transformed input.


```python
import torch

from zeta.nn import SwiGLUStacked

# Run a random (5, 10) input through a stacked SwiGLU block
x = torch.randn(5, 10)
swiglu = SwiGLUStacked(10, 20)
print(swiglu(x).shape)
```

In this example, we first import the necessary modules: torch for tensor operations and SwiGLUStacked from zeta.nn for the SwiGLU activation.

We then create a random input tensor x with shape (5, 10) and instantiate SwiGLUStacked with an input size of 10 and an output size of 20.

Finally, we pass x through the swiglu module, which applies the SwiGLU activation, and print the shape of the resulting output tensor to verify the transformation.
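
For reference, here is a minimal sketch of the SwiGLU computation itself; the layer names below are illustrative assumptions, not zeta's SwiGLUStacked internals.

```python
import torch
import torch.nn.functional as F
from torch import nn

# SwiGLU(x) = SiLU(x W) * (x V): one linear branch, passed through Swish/SiLU,
# gates the other linear branch elementwise.
dim_in, dim_hidden = 10, 20
w_gate = nn.Linear(dim_in, dim_hidden)
w_value = nn.Linear(dim_in, dim_hidden)

x = torch.randn(5, dim_in)
out = F.silu(w_gate(x)) * w_value(x)
print(out.shape)  # torch.Size([5, 20])
```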

-------

### RelativePositionBias
- `RelativePositionBias` quantizes the distance between two positions into a fixed number of buckets and then uses an embedding to look up the relative position bias. This provides the attention mechanism with biases based on the relative positions of queries and keys, rather than relying solely on their absolute positions.

```python
import torch
from torch import nn

from zeta.nn import RelativePositionBias

# Initialize the RelativePositionBias module
rel_pos_bias = RelativePositionBias()

# Example 1: Compute bias for a single batch
bias_matrix = rel_pos_bias(1, 10, 10)


# Example 2: Utilize in conjunction with an attention mechanism
# NOTE: This is a mock example, and may not represent an actual attention mechanism's complete implementation.
class MockAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.rel_pos_bias = RelativePositionBias()

    def forward(self, queries, keys):
        bias = self.rel_pos_bias(queries.size(0), queries.size(1), keys.size(1))
        # Further computations with bias in the attention mechanism...
        return None  # Placeholder


# Example 3: Modify default configurations
custom_rel_pos_bias = RelativePositionBias(
    bidirectional=False, num_buckets=64, max_distance=256, num_heads=8
)
```
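
The toy sketch below illustrates the bucketing idea on its own; the bucket scheme and names are simplified assumptions, not zeta's actual implementation. The resulting per-head matrix is what gets added to the attention logits.

```python
import torch
from torch import nn

# Map the signed query-key distance into a fixed number of buckets, then look
# up a per-head bias that is later added to the attention logits.
num_buckets, num_heads = 32, 8
bias_table = nn.Embedding(num_buckets, num_heads)

q_len, k_len = 10, 10
rel_pos = torch.arange(k_len)[None, :] - torch.arange(q_len)[:, None]   # (q_len, k_len)
buckets = rel_pos.clamp(-num_buckets // 2, num_buckets // 2 - 1) + num_buckets // 2

bias = bias_table(buckets).permute(2, 0, 1)  # (num_heads, q_len, k_len)
print(bias.shape)
```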

### `FeedForward`
The FeedForward module applies a feedforward (MLP) block to the input tensor, with optional GLU gating, post-activation LayerNorm, and dropout.
It is used in most language, multi-modal, and other modern neural networks.

```python
import torch

from zeta.nn import FeedForward

model = FeedForward(256, 512, glu=True, post_act_ln=True, dropout=0.2)

x = torch.randn(1, 256)

output = model(x)
print(output.shape)
```

### `BitLinear`
- The BitLinear module performs a linear transformation on the input data, followed by quantization and dequantization. Quantization uses the absmax_quantize function, which scales the tensor by its absolute maximum value, as described in [the BitNet paper](https://arxiv.org/abs/2310.11453).
```python
import torch
from torch import nn

import zeta.quant as qt


class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = qt.BitLinear(10, 20)

    def forward(self, x):
        return self.linear(x)


# Initialize the model
model = MyModel()

# Create a random tensor of size (128, 10)
input = torch.randn(128, 10)

# Perform the forward pass
output = model(input)

# Print the size of the output
print(output.size())  # torch.Size([128, 20])
```
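
For intuition, here is a minimal sketch of absmax quantization as described above. It is an assumed illustration of the mechanism, not zeta's absmax_quantize itself.

```python
import torch

# Quantize by scaling with the absolute maximum, then dequantize back.
def absmax_quantize(x: torch.Tensor, bits: int = 8):
    qmax = 2 ** (bits - 1) - 1
    scale = qmax / x.abs().max().clamp(min=1e-8)
    q = (x * scale).round().clamp(-qmax, qmax).to(torch.int8)
    return q, scale

def absmax_dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.float() / scale

x = torch.randn(4, 10)
q, scale = absmax_quantize(x)
x_hat = absmax_dequantize(q, scale)
print((x - x_hat).abs().max())  # small reconstruction error
```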

### `PalmE`
- An implementation of the multi-modal PaLM-E model that uses a decoder-only LLM as the backbone and a ViT image encoder for vision. The architecture is very similar to GPT-4, Kosmos, RT-2, and many other multi-modal models.

```python
import torch

from zeta.structs import (
    AutoRegressiveWrapper,
    Decoder,
    Encoder,
    Transformer,
    ViTransformerWrapper,
)


class PalmE(torch.nn.Module):
    """
        PalmE is a transformer architecture that uses a ViT encoder and a transformer decoder.

        Args:

            image_size (int): Size of the image.
            patch_size (int): Size of the patch.
            encoder_dim (int): Dimension of the encoder.
            encoder_depth (int): Depth of the encoder.
            encoder_heads (int): Number of heads in the encoder.
            num_tokens (int): Number of tokens.
            max_seq_len (int): Maximum sequence length.
            decoder_dim (int): Dimension of the decoder.
            decoder_depth (int): Depth of the decoder.
            decoder_heads (int): Number of heads in the decoder.
            alibi_num_heads (int): Number of heads in the alibi attention.
            attn_kv_heads (int): Number of heads in the attention key-value projection.
            use_abs_pos_emb (bool): Whether to use absolute positional embeddings.
            cross_attend (bool): Whether to cross attend in the decoder.
            alibi_pos_bias (bool): Whether to use positional bias in the alibi attention.
            rotary_xpos (bool): Whether to use rotary positional embeddings.
            attn_flash (bool): Whether to use attention flash.
            qk_norm (bool): Whether to normalize the query and key in the attention layer.

        Returns:

                torch.Tensor: The output of the model.

        Usage:

    img = torch.randn(1, 3, 256, 256)
    text = torch.randint(0, 20000, (1, 1024))
    model = PalmE()
    output = model(img, text)
    print(output)

    """

    def __init__(
        self,
        image_size=256,
        patch_size=32,
        encoder_dim=512,
        encoder_depth=6,
        encoder_heads=8,
        num_tokens=20000,
        max_seq_len=1024,
        decoder_dim=512,
        decoder_depth=6,
        decoder_heads=8,
        alibi_num_heads=4,
        attn_kv_heads=2,
        use_abs_pos_emb=False,
        cross_attend=True,
        alibi_pos_bias=True,
        rotary_xpos=True,
        attn_flash=True,
        qk_norm=True,
    ):
        super().__init__()

        # vit architecture
        self.encoder = ViTransformerWrapper(
            image_size=image_size,
            patch_size=patch_size,
            attn_layers=Encoder(
                dim=encoder_dim, depth=encoder_depth, heads=encoder_heads
            ),
        )

        # palm model architecture
        self.decoder = Transformer(
            num_tokens=num_tokens,
            max_seq_len=max_seq_len,
            use_abs_pos_emb=use_abs_pos_emb,
            attn_layers=Decoder(
                dim=decoder_dim,
                depth=decoder_depth,
                heads=decoder_heads,
                cross_attend=cross_attend,
                alibi_pos_bias=alibi_pos_bias,
                alibi_num_heads=alibi_num_heads,
                rotary_xpos=rotary_xpos,
                attn_kv_heads=attn_kv_heads,
                attn_flash=attn_flash,
                qk_norm=qk_norm,
            ),
        )

        # autoregressive wrapper to enable generation of tokens
        self.decoder = AutoRegressiveWrapper(self.decoder)

    def forward(self, img: torch.Tensor, text: torch.Tensor):
        """Forward pass of the model."""
        try:
            encoded = self.encoder(img, return_embeddings=True)
            return self.decoder(text, context=encoded)
        except Exception as error:
            print(f"Failed in forward method: {error}")
            raise


# Usage with random inputs
img = torch.randn(1, 3, 256, 256)
text = torch.randint(0, 20000, (1, 1024))

# Initialize the model
model = PalmE()
output = model(img, text)
print(output)
```


### `Unet`
U-Net is a well-known convolutional neural network architecture originally developed for biomedical image segmentation that later became a backbone of the generative AI revolution. The architecture comprises two primary pathways, downsampling and upsampling, followed by an output convolution. Its U shape gives the architecture its name, and its symmetric design ensures that context (from downsampling) and localization (from upsampling) are both captured effectively.

```python
import torch

from zeta.nn import Unet

# Initialize the U-Net model
model = Unet(n_channels=1, n_classes=2)

# Random input tensor with dimensions [batch_size, channels, height, width]
x = torch.randn(1, 1, 572, 572)

# Forward pass through the model
y = model(x)

# Output
print(f"Input shape: {x.shape}")
print(f"Output shape: {y.shape}")
```


### `VisionEmbeddings`
The VisionEmbedding class is designed for converting images into patch embeddings, making them suitable for processing by transformer-based models. This class plays a crucial role in various computer vision tasks and enables the integration of vision data into transformer architectures!

```python
import torch

from zeta.nn import VisionEmbedding

# Create an instance of VisionEmbedding
vision_embedding = VisionEmbedding(
    img_size=224,
    patch_size=16,
    in_chans=3,
    embed_dim=768,
    contain_mask_token=True,
    prepend_cls_token=True,
)

# Load an example image (3 channels, 224x224)
input_image = torch.rand(1, 3, 224, 224)

# Perform image-to-patch embedding
output = vision_embedding(input_image)

# The output now contains patch embeddings, ready for input to a transformer model
```
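
The sketch below shows the patch-embedding idea itself; it is illustrative, not zeta's VisionEmbedding. A strided Conv2d splits the image into non-overlapping patches and projects each patch to embed_dim, then the grid is flattened into a token sequence.

```python
import torch
from torch import nn

# Patch embedding: Conv2d with kernel_size == stride == patch_size.
patch_size, embed_dim = 16, 768
proj = nn.Conv2d(3, embed_dim, kernel_size=patch_size, stride=patch_size)

image = torch.rand(1, 3, 224, 224)
patches = proj(image)                        # (1, 768, 14, 14)
tokens = patches.flatten(2).transpose(1, 2)  # (1, 196, 768)
print(tokens.shape)
```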


### `niva`
- Niva quantizes the weights of selected layer types (specified by `quantize_layers`). Dynamic quantization is ideal for models whose activation ranges vary at runtime. 👁️ Example layers: `nn.Embedding`, `nn.LSTM`.

```python
import torch
from torch import nn

from zeta import niva

# Load a pre-trained model (YourModelClass is a placeholder for your own nn.Module)
model = YourModelClass()

# Quantize the model dynamically, specifying which layer types to quantize
niva(
    model=model,
    model_path="path_to_pretrainedim_weights.pt",
    output_path="quantizedim.pt",
    quant_type="dynamic",
    quantize_layers=[nn.Linear, nn.Conv2d],
    dtype=torch.qint8,
)
```
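
For a rough idea of what dynamic weight quantization does under the hood, the sketch below uses PyTorch's built-in utility. This is an assumption about the general mechanism, not niva's actual implementation.

```python
import torch
from torch import nn

# Dynamically quantize the weights of all Linear layers to int8.
model = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 5))
quantized = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized)
```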


### `FusedDenseGELUDense`
- Increase model speed by 2x with this module, which fuses two hyper-optimized dense ops from bitsandbytes with a GELU in between!

```python
import torch

from zeta.nn import FusedDenseGELUDense

x = torch.randn(1, 512)
model = FusedDenseGELUDense(512, 1024)
out = model(x)
print(out.shape)
```
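
For reference, here is the unfused computation this roughly corresponds to, a sketch under the assumption of a dense, GELU, dense block that projects back to the input dimension.

```python
import torch
from torch import nn

# Unfused reference: Linear -> GELU -> Linear.
reference = nn.Sequential(nn.Linear(512, 1024), nn.GELU(), nn.Linear(1024, 512))
print(reference(torch.randn(1, 512)).shape)  # torch.Size([1, 512])
```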


### `FusedDropoutLayerNorm`
- FusedDropoutLayerNorm fuses dropout and LayerNorm into a single kernel to speed up FFNs and MLPs by 2x.

```python
import torch

from zeta.nn import FusedDropoutLayerNorm

# Initialize the module
model = FusedDropoutLayerNorm(dim=512)

# Create a sample input tensor
x = torch.randn(1, 512)

# Forward pass
output = model(x)

# Check output shape
print(output.shape)  # Expected: torch.Size([1, 512])
```


### `Mamba`
- PyTorch implementation of the Mamba state space model (SSM) architecture.

```python
import torch

from zeta.nn import MambaBlock

# Initialize Mamba
block = MambaBlock(dim=64, depth=1)

# Random input
x = torch.randn(1, 10, 64)

# Apply the block to the input
y = block(x)

print(y.shape)
# torch.Size([1, 10, 64])
```

### `FiLM`

The Film layer applies Feature-wise Linear Modulation (FiLM): a conditioning vector produces a per-feature scale and shift that modulate the hidden features.

```python
import torch

from zeta.nn import Film

# Initialize the Film layer
film_layer = Film(dim=128, hidden_dim=64, expanse_ratio=4)

# Create some dummy data for conditions and hiddens
conditions = torch.randn(10, 128)  # Batch size is 10, feature size is 128
hiddens = torch.randn(
    10, 1, 128
)  # Batch size is 10, sequence length is 1, feature size is 128

# Pass the data through the Film layer
modulated_features = film_layer(conditions, hiddens)

# Print the shape of the output
print(modulated_features.shape)  # Should be [10, 1, 128]
```
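
The sketch below shows the FiLM mechanism itself; the layer names are illustrative assumptions, not zeta's Film module.

```python
import torch
from torch import nn

# The conditioning vector produces a per-feature scale (gamma) and shift (beta)
# that modulate the hidden features.
dim = 128
to_gamma_beta = nn.Linear(dim, 2 * dim)

conditions = torch.randn(10, dim)
hiddens = torch.randn(10, 1, dim)

gamma, beta = to_gamma_beta(conditions).chunk(2, dim=-1)        # (10, 128) each
modulated = gamma.unsqueeze(1) * hiddens + beta.unsqueeze(1)    # (10, 1, 128)
print(modulated.shape)
```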

### `hyper_optimize`
- A single wrapper for torch.fx, torch.jit.script, torch.compile, dynamic quantization, and mixed precision via torch.amp, with execution-time metrics, all in one place!
```python
import torch

from zeta.nn import hyper_optimize


@hyper_optimize(
    torch_fx=False,
    torch_script=False,
    torch_compile=True,
    quantize=True,
    mixed_precision=True,
    enable_metrics=True,
)
def model(x):
    return x @ x


out = model(torch.randn(1, 3, 32, 32))
print(out)
```
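
For intuition, here is a rough, hand-rolled equivalent of a few of the things such a wrapper combines, done with stock PyTorch. These are assumptions about its internals, for illustration only.

```python
import time

import torch

# Compile the function, run it under autocast for mixed precision, and time the call.
def matmul(x):
    return x @ x

compiled = torch.compile(matmul)

x = torch.randn(32, 32)
start = time.perf_counter()
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = compiled(x)
print(f"forward took {time.perf_counter() - start:.4f}s, output shape {tuple(out.shape)}")
```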


### DPO - Direct Preference Optimization
Direct Preference Optimization (DPO), used in many preference-alignment (RLHF-style) applications for LLMs.

```python
import torch
from torch import nn

from zeta.rl import DPO


# Define a simple policy model
class PolicyModel(nn.Module):
    def __init__(self, dim, output_dim):
        super().__init__()
        self.fc = nn.Linear(dim, output_dim)

    def forward(self, x):
        return self.fc(x)


dim = 10
output_dim = 5
policy_model = PolicyModel(dim, output_dim)

# Initialize DPO with the policy model
dpo_model = DPO(model=policy_model, beta=0.1)

# Sample preferred and unpreferred sequences
preferred_seq = torch.randint(0, output_dim, (3, dim))
unpreferred_seq = torch.randint(0, output_dim, (3, dim))

# Compute loss
loss = dpo_model(preferred_seq, unpreferred_seq)
print(loss)
```
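
For reference, here is a minimal sketch of the pairwise DPO loss itself; the log-probability values below are made-up illustrations, and zeta's DPO module wraps this idea around a policy and a reference model.

```python
import torch
import torch.nn.functional as F

# loss = -log sigmoid(beta * [(log pi(y_w) - log ref(y_w)) - (log pi(y_l) - log ref(y_l))])
beta = 0.1
policy_chosen_logps = torch.tensor([-10.0, -12.0])
policy_rejected_logps = torch.tensor([-11.0, -11.5])
ref_chosen_logps = torch.tensor([-10.5, -12.2])
ref_rejected_logps = torch.tensor([-10.8, -11.6])

logits = (policy_chosen_logps - ref_chosen_logps) - (
    policy_rejected_logps - ref_rejected_logps
)
loss = -F.logsigmoid(beta * logits).mean()
print(loss)
```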


## PyTorch Model Logging
- A decorator that logs the execution of a PyTorch model, including parameters, gradients, and memory usage.

```python
import torch
from torch import nn

from zeta.utils import verbose_execution


# Decorate the model class so each layer's execution is logged
@verbose_execution(log_params=True, log_gradients=True, log_memory=True)
class YourPyTorchModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 64, 3)
        self.relu = nn.ReLU()
        self.flatten = nn.Flatten()
        self.fc = nn.Linear(64 * 222 * 222, 10)  # Adjusted input size

    def forward(self, x):
        x = self.conv1(x)
        x = self.relu(x)
        x = self.flatten(x)
        x = self.fc(x)
        return x

# Create and use your model
model = YourPyTorchModel()
input_tensor = torch.randn(1, 3, 224, 224)
output = model(input_tensor)

# If you want to see gradient information, you need to perform a backward pass
loss = output.sum()
loss.backward()
```
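
As a rough idea of how per-layer logging can be implemented, the sketch below uses forward hooks. This is an assumption about the general mechanism, not zeta's verbose_execution.

```python
import torch
from torch import nn

# Register a forward hook on each child module that prints its output shape.
def log_layers(model: nn.Module) -> nn.Module:
    def hook(module, inputs, output):
        print(f"{module.__class__.__name__}: output shape {tuple(output.shape)}")

    for layer in model.children():
        layer.register_forward_hook(hook)
    return model

net = log_layers(nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4)))
net(torch.randn(2, 8))
```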


## Sigmoid Attention

Attention 18% faster with a sigmoid instead of softmax:

- replace the traditional softmax in attention with an elementwise sigmoid, and
- add a constant (not learned) scalar bias based on the sequence length.


```python
import torch

from zeta import SigmoidAttention

batch_size = 32
seq_len = 128
dim = 512
heads = 8

x = torch.rand(batch_size, seq_len, dim)
mask = torch.ones(batch_size, seq_len, seq_len)  # Example mask

sigmoid_attn = SigmoidAttention(dim, heads, seq_len)
output = sigmoid_attn(x, mask)
print(output.shape)
```
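
For intuition, here is a minimal sketch of the scoring change described above; it is an assumed illustration, not zeta's SigmoidAttention.

```python
import torch

# Replace softmax with an elementwise sigmoid plus a constant bias of -log(seq_len).
batch, heads, seq_len, head_dim = 2, 8, 128, 64
q = torch.randn(batch, heads, seq_len, head_dim)
k = torch.randn(batch, heads, seq_len, head_dim)
v = torch.randn(batch, heads, seq_len, head_dim)

scores = (q @ k.transpose(-2, -1)) / head_dim**0.5
weights = torch.sigmoid(scores - torch.log(torch.tensor(float(seq_len))))
out = weights @ v
print(out.shape)  # torch.Size([2, 8, 128, 64])
```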



# Documentation
All classes should have documentation. If you see a class or function without documentation, please report it to me at kye@apac.ai.

Documentation is at [zeta.apac.ai](https://zeta.apac.ai/)


-------


# Running tests
Install the pre-commit hooks with `pre-commit install`. This will run the linter, mypy, and a subset of the tests on every commit.

For more examples of how to run the full test suite, please refer to the CI workflow.

Some examples of running tests locally:

```bash
python3 -m pip install -e '.[testing]'  # install extra deps for testing
python3 -m pytest tests/                 # whole test suite
```
----

## Community

Join our growing community around the world for real-time support, ideas, and discussions on how to build better models 😊 

- View our official [Docs](https://zeta.apac.ai)
- Chat live with us on [Discord](https://discord.gg/kS3rwKs3ZC)
- Follow us on [Twitter](https://twitter.com/kyegomez)
- Connect with us on [LinkedIn](https://www.linkedin.com/company/the-swarm-corporation)
- Visit us on [YouTube](https://www.youtube.com/channel/UC9yXyitkbU_WSy7bd_41SqQ)
- [Join the Swarms community on Discord!](https://discord.gg/AJazBmhKnr)

---

# 🤝 Schedule a 1-on-1 Session
Want to train a custom AI model for a real-world task like general multi-modal models, facial recognition, drug discovery, or humanoid robotics? I'll help you design the model architecture, train the model, and then optimize it to meet your quality assurance standards.

Book a [1-on-1 session with Kye](https://calendly.com/apacai/agora), the creator, to discuss any issues, provide feedback, explore how we can improve Zeta for you, or get help building your own custom models!

## 🫶 Contributions:

The easiest way to contribute is to pick any issue with the `good first issue` tag 💪. Read the Contributing guidelines [here](/CONTRIBUTING.md). Bug Report? [File here](https://github.com/kyegomez/zeta/issues/new/choose) | Feature Request? [File here](https://github.com/kyegomez/zeta/issues/new/choose)

Zeta is an open-source project, and contributions are VERY welcome. If you want to contribute, you can create new features, fix bugs, or improve the infrastructure. Please refer to the [CONTRIBUTING.md](https://github.com/kyegomez/zeta/blob/master/CONTRIBUTING.md) and our [contributing board](https://github.com/users/kyegomez/projects/1) to participate in Roadmap discussions!

<a href="https://github.com/kyegomez/zeta/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=kyegomez/zeta" />
</a>

----

## Accelerate Backlog
Help us accelerate our backlog by supporting us financially! Note: we're an open-source corporation, so at the moment all of our revenue comes from donations ;)

<a href="https://polar.sh/kyegomez"><img src="https://polar.sh/embed/fund-our-backlog.svg?org=kyegomez" /></a>


# License 
- Apache


# Citation
```bibtex
@misc{zetascale,
    title = {Zetascale Framework},
    author = {Kye Gomez},
    year = {2024},
    howpublished = {\url{https://github.com/kyegomez/zeta}},
}
```

            
