hrtx

Name: hrtx
Version: 0.0.8
Home page: https://github.com/kyegomez/hrtx
Summary: HRTX - Pytorch
Upload time: 2024-02-23 18:26:37
Author: Kye Gomez
Requires Python: >=3.6,<4.0
License: MIT
Keywords: artificial intelligence, deep learning, optimizers, prompt engineering
Requirements: torch, einops, zetascale
[![Multi-Modality](agorabanner.png)](https://discord.gg/qUtxnK2NMf)

# HRTX
Hivemind Multi-Modality Transformer (HMMT) model architecture: a multi-modality model that can ingest text and video from any number of input streams at the same time and then output instructions for every robot, or for any N robots.


## Install
`pip install hrtx`

## Usage

### `EarlyExitTransformer` 
A transformer with output heads after every forward pass, so predictions can exit early.

```python
import torch 
from hrtx.ee_network import EarlyExitTransformer

# Input tensor of token ids
token = torch.randint(0, 1000, (1, 10))

# Create the model
model = EarlyExitTransformer(
    dim=512, depth=6,
    num_tokens=1000, seq_len=10,
    heads=8, dim_head=64, num_robots=3
)


# Forward pass
output = model(token)

# Print the output shape
print(output.shape)

# Output: torch.Size([1, 10, 512])

```



### `MIMOTransformer`
Multi-Input Multi-Output Transformer model for parallel robotic execution.

```python
import torch 
from hrtx.mimo import MIMOTransformer

# Input tensors: one per robot
x = torch.randn(1, 10, 512)
x = [x, x, x]

# Create the model
model = MIMOTransformer(
    dim=512, depth=6, heads=8, dim_head=64, num_robots=3
)


# Forward pass
output = model(x)

# Print the output
print(output)

# Output: torch.Size([1, 10, 512])
```



### `SAETransformer`
SAE: multiple inputs, one for every robot, followed by an output head.

```python
import torch
from hrtx.sae_transformer import SAETransformer

# Input tensors: one per robot
x = torch.randn(1, 10, 512)
x = [x, x, x]

# Create the model
model = SAETransformer(
    dim=512, depth=6, heads=8, dim_head=64, num_robots=3
)


# Forward pass
output = model(x)

# Print the output
print(output)

# Output: torch.Size([1, 10, 512])

```


### `MIMMO`
Takes a list of token tensors, one per robot, and returns a list of per-robot outputs.
```python
import torch
from hrtx.mimmo import MIMMO


# Input: one token tensor per robot
x = [torch.randint(0, 1000, (1, 10)) for _ in range(3)]


# Create the model
model = MIMMO(
    dim=512,
    depth=6,
    num_tokens=1000,
    seq_len=10,
    heads=8,
    dim_head=64,
    num_robots=3,
)

# Forward pass
output = model(x)

# Print the shape of the first robot's output
print(output[0].shape)
``` 



**Hivemind Multi-Modality Transformer (HMMT) Model Architecture Specification**

### Objective:
Design the model architecture for a Hivemind Multi-Modality Transformer that can accept multi-modal inputs from any number ('x') of robots and send instructions to all robots, a single robot, or any selected subset of robots.

### Features:

1. **Multi-Modal Embedding Layers**:
   - Distinct embedding layers tailored for each modality (e.g., vision, audio, sensor data).
   - Fusion mechanisms to cohesively merge embeddings into a comprehensive representation.

2. **Dynamic Input Channels**:
   - Ability to automatically adapt the number of input channels based on 'x' robots.
   - Channel attention mechanisms that assign weights to the significance of each robot's input.

3. **Hierarchical Attention Mechanisms**:
   - Multilayer attention that can concentrate on specific modalities or individual robots.
   - Global attention modules for a holistic scene or context comprehension.

4. **Adaptive Computation**:
   - Layers designed to modulate computations based on input intricacy.
   - Streamlined processing pathways for rudimentary tasks, with deeper computations allocated for intricate tasks.

5. **Output Decoders**:
   - Multiple decoders tailored for various types of instructions (e.g., navigation, task-specific commands).
   - Multi-head output configuration for concurrent instruction formulation for diverse robots.

6. **Latency Optimization**:
   - Fast-track routes for immediate instruction delivery.
   - Asynchronous processing units to handle non-immediate tasks.

7. **Robustness & Generalization**:
   - Embedded mechanisms to ensure model resilience against diverse input types.
   - Capacity to handle and process noisy or unexpected inputs without faltering.

8. **Model Parallelism & Scalability**:
   - Distributed model design to cater to a vast number of robot inputs efficiently.
   - Individual micro-models for each robot that operate concurrently.
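
Taken together, these features suggest a simple forward-pass shape: per-robot micro-encoders, a shared fusion stage, and per-robot instruction heads. Below is a minimal, illustrative sketch of that shape in plain PyTorch; `HMMTSkeleton`, its sub-modules, and `num_actions` are hypothetical names used for illustration, not part of the hrtx API.

```python
import torch
from torch import nn
from typing import List


class HMMTSkeleton(nn.Module):
    """Illustrative skeleton: per-robot encoders -> shared fusion -> per-robot heads."""

    def __init__(self, dim: int, num_robots: int, num_actions: int):
        super().__init__()
        # Feature 8: one lightweight micro-encoder per robot.
        self.encoders = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
             for _ in range(num_robots)]
        )
        # Feature 1: shared fusion block over the concatenated robot sequences.
        self.fusion = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        # Feature 5: one instruction head per robot.
        self.heads = nn.ModuleList(
            [nn.Linear(dim, num_actions) for _ in range(num_robots)]
        )

    def forward(self, robot_feats: List[torch.Tensor]) -> List[torch.Tensor]:
        # Encode each robot's feature sequence independently.
        encoded = [enc(x) for enc, x in zip(self.encoders, robot_feats)]
        # Fuse all robots into one shared context sequence.
        fused = self.fusion(torch.cat(encoded, dim=1))
        # Pool the shared context and emit one instruction vector per robot.
        pooled = fused.mean(dim=1)
        return [head(pooled) for head in self.heads]


model = HMMTSkeleton(dim=512, num_robots=3, num_actions=16)
feats = [torch.randn(1, 10, 512) for _ in range(3)]
outputs = model(feats)
print([o.shape for o in outputs])  # [torch.Size([1, 16]), torch.Size([1, 16]), torch.Size([1, 16])]
```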

Transformers, since their introduction in the "Attention Is All You Need" paper, have revolutionized deep learning. Initially designed for sequence-to-sequence tasks like language translation, they have since found relevance in a variety of domains, including computer vision, where they are termed Vision Transformers (ViTs).

In the context of our HMMT, the transformer architecture serves as the backbone. Its self-attention mechanism allows it to weigh the significance of various inputs relative to each other, making it well suited to multi-modal data and to inputs from multiple robots.

#### 1. **Transformers in Multi-Modal Embeddings**:

For each modality (e.g., vision, audio, sensor data), we first transform raw inputs into embeddings. Transformers can be employed here in two main ways:

- **Sequential Transformers**: Each modality's data, which can often be sequential (like a series of sensor readings or words in a command), is fed into a transformer. This transformer learns the inherent sequence patterns and produces a contextual embedding for the entire sequence.

- **Cross-Modal Attention**: Once individual modalities have their embeddings, a higher-order transformer can be used to establish attention across modalities. This means the model can understand, for instance, that a visual input of a red light might be highly relevant to an audio input of a siren.
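
As an illustration of the cross-modal attention step, here is a minimal sketch in plain PyTorch; the encoder and attention modules and the token counts are assumptions for demonstration, not the hrtx implementation.

```python
import torch
from torch import nn

# Per-modality encoders, then vision tokens attend over audio tokens.
dim = 512
vision_encoder = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
audio_encoder = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
cross_attn = nn.MultiheadAttention(embed_dim=dim, num_heads=8, batch_first=True)

vision = vision_encoder(torch.randn(1, 16, dim))  # 16 visual tokens (illustrative)
audio = audio_encoder(torch.randn(1, 32, dim))    # 32 audio tokens (illustrative)

# Vision queries attend over the audio sequence, producing a vision-sized
# representation enriched with audio context (e.g. "red light" <-> "siren").
fused, attn_weights = cross_attn(query=vision, key=audio, value=audio)
print(fused.shape)  # torch.Size([1, 16, 512])
```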

#### 2. **Dynamic Channel Adjuster with Transformers**:

The ability to dynamically adjust to varying robot counts is crucial. Here, transformers play a pivotal role:

- **Channel-wise Self-Attention**: For each robot input channel, a transformer layer assesses the importance of that channel in the context of all other channels. It provides a weighted representation, emphasizing more crucial channels and dampening less relevant ones.
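
One minimal way to realize channel-wise self-attention is to pool each robot's sequence to a single "channel token" and let a self-attention layer reweigh the channels; the sketch below is illustrative and not the hrtx implementation.

```python
import torch
from torch import nn

# Treat each robot as one "channel token" and let self-attention weigh the channels.
dim, num_robots = 512, 4
channel_attn = nn.MultiheadAttention(embed_dim=dim, num_heads=8, batch_first=True)

# One (batch, seq, dim) feature tensor per robot, pooled to a single channel token each.
robot_feats = [torch.randn(1, 10, dim) for _ in range(num_robots)]
channels = torch.stack([f.mean(dim=1) for f in robot_feats], dim=1)  # (1, num_robots, dim)

# Self-attention across channels: the weights show how much each robot's channel
# borrows from the others; the output is a reweighted per-channel representation.
weighted, attn_weights = channel_attn(channels, channels, channels)
print(weighted.shape, attn_weights.shape)  # torch.Size([1, 4, 512]) torch.Size([1, 4, 4])
```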

#### 3. **Hierarchical Attention Mechanisms**:

The strength of transformers lies in their ability to handle various scales of attention:

- **Local Attention**: For tasks like image recognition in a visual modality, transformers can focus on local patterns (like the shape of an object).

- **Global Attention**: For understanding the broader context (like the overall scene in a visual feed or the overarching command in a textual instruction), transformers can spread their attention globally.

By stacking multiple transformer layers, we form a hierarchy. The initial layers focus on local patterns, while deeper layers capture broader contexts. This hierarchical structure is beneficial for multi-modal data as it helps in bridging local features of one modality with global features of another.
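
A hedged sketch of that hierarchy, using a banded attention mask for the early (local) layers and unmasked attention for the deeper (global) layers; the layer counts and window size are arbitrary choices for illustration, not hrtx settings.

```python
import torch
from torch import nn


def local_mask(seq_len: int, window: int) -> torch.Tensor:
    """Boolean mask that blocks attention beyond +/- `window` positions (True = masked)."""
    idx = torch.arange(seq_len)
    return (idx[None, :] - idx[:, None]).abs() > window


dim, seq_len = 512, 32
local_layers = nn.ModuleList(
    [nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True) for _ in range(2)]
)
global_layers = nn.ModuleList(
    [nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True) for _ in range(2)]
)

x = torch.randn(1, seq_len, dim)
mask = local_mask(seq_len, window=4)
for layer in local_layers:
    x = layer(x, src_mask=mask)  # early layers: local patterns only
for layer in global_layers:
    x = layer(x)                 # deeper layers: full global context
print(x.shape)  # torch.Size([1, 32, 512])
```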

#### 4. **ViTs in Multi-Modal Fusion Blocks**:

Vision Transformers (ViTs) divide an image into fixed-size non-overlapping patches, linearly embed them, and then feed them into a standard transformer. In the context of HMMT:

- **Patch-based Embedding**: For each robot's visual feed, ViTs can extract crucial visual patches, allowing the model to focus on significant parts of the visual data, like an object of interest or a particular gesture.

- **Fused Visual Representation**: The output of the ViT for each visual feed is a rich representation that can be fused with embeddings from other modalities using subsequent transformer layers.
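
For reference, the standard ViT-style patch embedding can be sketched with einops (already a dependency of this package) and a linear projection; the image size and patch size below are illustrative.

```python
import torch
from torch import nn
from einops import rearrange

dim, patch = 512, 16
frame = torch.randn(1, 3, 224, 224)  # one robot's camera frame: (batch, channels, H, W)

# Split into non-overlapping 16x16 patches and flatten each patch.
patches = rearrange(frame, "b c (h p1) (w p2) -> b (h w) (p1 p2 c)", p1=patch, p2=patch)
to_embedding = nn.Linear(patch * patch * 3, dim)  # linear projection of flattened patches
tokens = to_embedding(patches)                    # ready for a standard transformer
print(tokens.shape)  # torch.Size([1, 196, 512])
```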

#### 5. **Latency Optimized Fast-Track Routes with Transformers**:

For scenarios demanding immediate response, we introduce fast-track transformer layers:

- **Shallow Transformers**: Instead of passing data through the entire depth of the model, shallow transformer layers can quickly process and produce outputs. These layers are trained to handle frequent and time-critical scenarios.
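
A minimal sketch of a confidence-gated fast track (separate from the library's `EarlyExitTransformer` shown earlier): a shallow layer answers immediately when its prediction is confident, otherwise the input continues through the deep stack. The threshold, head, and layer counts are illustrative assumptions.

```python
import torch
from torch import nn

dim, num_actions, threshold = 512, 16, 0.9
shallow = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
deep = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True), num_layers=6
)
head = nn.Linear(dim, num_actions)

x = torch.randn(1, 10, dim)
fast = shallow(x)
logits = head(fast.mean(dim=1))
confidence = logits.softmax(dim=-1).max().item()

if confidence >= threshold:
    output = logits                          # fast track: answer immediately
else:
    output = head(deep(fast).mean(dim=1))    # slow path: continue through the full depth
print(output.shape)  # torch.Size([1, 16])
```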

#### 6. **Multi-Headed Decoders with Transformer Heads**:

The concept of multi-headed attention in transformers can be extended to our decoders:

- **Task-specific Heads**: Each head can be tailored for a specific type of instruction. For instance, one head can focus on navigation, while another can handle task-specific commands.

- **Conditional Parallelism in Heads**: Depending on the input, certain heads can be activated while others remain dormant. This dynamic activation ensures efficient computation.
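
A small sketch of task-specific heads with conditional activation; the head names, output sizes, and routing rule are placeholders for illustration, not part of hrtx.

```python
import torch
from torch import nn

dim = 512
heads = nn.ModuleDict({
    "navigation": nn.Linear(dim, 4),     # e.g. forward / back / left / right
    "manipulation": nn.Linear(dim, 12),  # e.g. joint targets
})

context = torch.randn(1, dim)  # pooled output of the shared trunk
active = ["navigation"]        # chosen by a router or by the incoming task

# Only the requested heads run; dormant heads cost nothing this step.
instructions = {name: heads[name](context) for name in active}
print({name: out.shape for name, out in instructions.items()})
```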

---

### Incorporating Model Parallelism with Transformers:

Given the inherent parallel nature of transformers, where each token or patch attends to every other token or patch, we can leverage model parallelism:

- **Micro-Transformers for Each Robot**: Each robot's data is first processed by a dedicated micro-transformer. These micro-models run concurrently, ensuring scalability.

- **Distributed Attention Computation**: The self-attention computation, which is quadratic with respect to the input length, can be distributed across multiple GPUs or TPUs.
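
A minimal sketch of the per-robot micro-transformer placement, assigning each robot's encoder to its own device when GPUs are available and falling back to CPU otherwise; the module names and shapes are illustrative, not the hrtx implementation.

```python
import torch
from torch import nn

dim, num_robots = 512, 3
n_gpus = torch.cuda.device_count()
devices = [torch.device(f"cuda:{i % n_gpus}") if n_gpus > 0 else torch.device("cpu")
           for i in range(num_robots)]

# One micro-transformer per robot, each placed on its own device.
micro_models = nn.ModuleList(
    [nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True).to(dev)
     for dev in devices]
)

# Each robot's stream is encoded on its own device; results are gathered on one
# device for the shared fusion stage.
robot_feats = [torch.randn(1, 10, dim) for _ in range(num_robots)]
encoded = [m(x.to(dev)) for m, x, dev in zip(micro_models, robot_feats, devices)]
gathered = torch.cat([e.to(devices[0]) for e in encoded], dim=1)
print(gathered.shape)  # torch.Size([1, 30, 512])
```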

---

### Conclusion:

The Hivemind Multi-Modality Transformer, with its deep integration of transformers and vision transformers, stands poised to redefine multi-robot control systems. The architecture leverages the strengths of transformers to handle diverse modalities, dynamically adjust to varying robot inputs, and produce precise instructions. With the added benefits of model parallelism, the HMMT ensures scalability, making it a promising solution for future swarm robotics and large-scale multi-robot systems.

The architecture of the Hivemind Multi-Modality Transformer is thus positioned at the forefront of multi-robot control, promising efficient and adaptive interaction across varied scenarios.


# License
MIT




            
