mh-moe

Name: mh-moe
Version: 0.0.2
Home page: https://github.com/kyegomez/MHMoE
Summary: Paper - Pytorch
Upload time: 2024-04-27 00:01:23
Maintainer: None
Docs URL: None
Author: Kye Gomez
Requires Python: <4.0,>=3.6
License: MIT
Keywords: artificial intelligence, deep learning, optimizers, prompt engineering
Requirements: No requirements were recorded.
Travis-CI: No Travis.
Coveralls test coverage: No coveralls.
[![Multi-Modality](agorabanner.png)](https://discord.gg/qUtxnK2NMf)

# Multi-Head Mixture of Experts (MHMoE)

MH-MoE enables the model to collectively attend to information from various representation
spaces within different experts, deepening context understanding while significantly enhancing expert activation.
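
In the MH-MoE paper, this is achieved by splitting each token along the feature dimension into `heads` sub-tokens, routing every sub-token to an expert, and merging the processed sub-tokens back into the original token form. The block below is a minimal sketch of that idea with simple top-1 routing, written for illustration only; it is not the implementation shipped in this package, and the class name, the expert MLP, and the 4x hidden width are assumptions.

```python
import torch
import torch.nn as nn

class MultiHeadMoESketch(nn.Module):
    """Illustrative sketch of the MH-MoE idea (not this package's code):
    split each token into `heads` sub-tokens, route every sub-token to an
    expert with top-1 routing, then merge the sub-tokens back into tokens."""

    def __init__(self, dim: int, heads: int, num_experts: int):
        super().__init__()
        assert dim % heads == 0, "dim must be divisible by heads"
        self.heads = heads
        self.head_dim = dim // heads
        self.router = nn.Linear(self.head_dim, num_experts)
        # Each expert is a small MLP over sub-token features (4x width is an assumption)
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(self.head_dim, 4 * self.head_dim),
                nn.GELU(),
                nn.Linear(4 * self.head_dim, self.head_dim),
            )
            for _ in range(num_experts)
        )
        self.merge = nn.Linear(dim, dim)  # merge layer applied after re-assembly

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, d = x.shape
        # 1) Split every token into `heads` sub-tokens of size head_dim
        sub = x.reshape(b * n * self.heads, self.head_dim)
        # 2) Top-1 routing: each sub-token goes to its highest-scoring expert
        gate = self.router(sub).softmax(dim=-1)
        top_w, top_idx = gate.max(dim=-1)
        out = torch.zeros_like(sub)
        for i, expert in enumerate(self.experts):
            hit = top_idx == i
            if hit.any():
                out[hit] = top_w[hit].unsqueeze(-1) * expert(sub[hit])
        # 3) Merge sub-tokens back into the original (batch, seq, dim) shape
        return self.merge(out.reshape(b, n, d))

# Quick shape check
layer = MultiHeadMoESketch(dim=512, heads=8, num_experts=4)
print(layer(torch.rand(2, 16, 512)).shape)  # torch.Size([2, 16, 512])
```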

## install
`pip3 install mh-moe`


## usage
```python
import torch
from mh_moe.main import MHMoE

# Define model parameters
dim = 512
heads = 8
num_experts = 4
num_layers = 3

# Create MHMoE model instance
model = MHMoE(dim, heads, num_experts, num_layers)

# Generate dummy input
batch_size = 10
seq_length = 20
dummy_input = torch.rand(batch_size, seq_length, dim)
dummy_mask = torch.ones(batch_size, seq_length)  # Example mask

# Forward pass through the model
output = model(dummy_input, dummy_mask)

# Print output and its shape
print(output)
print(output.shape)
```
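
Because the layer takes a `(batch, seq_length, dim)` tensor plus a mask and, as the shape printout above suggests, returns a tensor of the same shape, it can be slotted into a larger model wherever a feed-forward sublayer would go. Below is a minimal sketch under that assumption, using only the `MHMoE(dim, heads, num_experts, num_layers)` constructor and `model(x, mask)` call shown above; the wrapper class itself is hypothetical and not part of the package.

```python
import torch
import torch.nn as nn
from mh_moe.main import MHMoE

class MoETransformerBlock(nn.Module):
    """Hypothetical wrapper (not part of mh-moe): a transformer-style block
    that uses MHMoE in place of the dense feed-forward sublayer."""

    def __init__(self, dim=512, heads=8, num_experts=4, num_layers=1):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        # Constructor signature taken from the usage example above;
        # num_layers=1 for a single block is an assumption
        self.moe = MHMoE(dim, heads, num_experts, num_layers)

    def forward(self, x, mask):
        attn_out, _ = self.attn(x, x, x)       # self-attention sublayer
        x = self.norm1(x + attn_out)           # residual + norm
        x = self.norm2(x + self.moe(x, mask))  # MH-MoE as the FFN sublayer
        return x

block = MoETransformerBlock()
x = torch.rand(2, 16, 512)
mask = torch.ones(2, 16)
print(block(x, mask).shape)  # expected: torch.Size([2, 16, 512])
```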
            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/kyegomez/MHMoE",
    "name": "mh-moe",
    "maintainer": null,
    "docs_url": null,
    "requires_python": "<4.0,>=3.6",
    "maintainer_email": null,
    "keywords": "artificial intelligence, deep learning, optimizers, Prompt Engineering",
    "author": "Kye Gomez",
    "author_email": "kye@apac.ai",
    "download_url": "https://files.pythonhosted.org/packages/c3/27/b11a07721e0f2eedc06e955a5008b7261c52624016c41e74aaa0acb22a04/mh_moe-0.0.2.tar.gz",
    "platform": null,
    "description": "[![Multi-Modality](agorabanner.png)](https://discord.gg/qUtxnK2NMf)\n\n# Multi-Head Mixture of Experts (MHMoE)\n\nMH-MoE to collectively attend to information from various representation\nspaces within different experts to deepen context understanding while significantly enhancing expert activation. \n\n## install\n`pip3 install mh-moe`\n\n\n## usage\n```python\nimport torch\nfrom mh_moe.main import MHMoE\n\n# Define model parameters\ndim = 512\nheads = 8\nnum_experts = 4\nnum_layers = 3\n\n# Create MHMoE model instance\nmodel = MHMoE(dim, heads, num_experts, num_layers)\n\n# Generate dummy input\nbatch_size = 10\nseq_length = 20\ndummy_input = torch.rand(batch_size, seq_length, dim)\ndummy_mask = torch.ones(batch_size, seq_length)  # Example mask\n\n# Forward pass through the model\noutput = model(dummy_input, dummy_mask)\n\n# Print output and its shape\nprint(output)\nprint(output.shape)\n```",
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "Paper - Pytorch",
    "version": "0.0.2",
    "project_urls": {
        "Documentation": "https://github.com/kyegomez/MHMoE",
        "Homepage": "https://github.com/kyegomez/MHMoE",
        "Repository": "https://github.com/kyegomez/MHMoE"
    },
    "split_keywords": [
        "artificial intelligence",
        " deep learning",
        " optimizers",
        " prompt engineering"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "ef11326c2c8ebb4ee326644183a3ba19b0fbf69cafa69e848fc6ccf9be47dfcb",
                "md5": "af965822c8e7695281e3f4d739af056b",
                "sha256": "b984be79496acf7cd3ab503ecb5c06b84f456a3f5599adae7558ea2a24ae35e7"
            },
            "downloads": -1,
            "filename": "mh_moe-0.0.2-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "af965822c8e7695281e3f4d739af056b",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": "<4.0,>=3.6",
            "size": 4942,
            "upload_time": "2024-04-27T00:01:21",
            "upload_time_iso_8601": "2024-04-27T00:01:21.906977Z",
            "url": "https://files.pythonhosted.org/packages/ef/11/326c2c8ebb4ee326644183a3ba19b0fbf69cafa69e848fc6ccf9be47dfcb/mh_moe-0.0.2-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "c327b11a07721e0f2eedc06e955a5008b7261c52624016c41e74aaa0acb22a04",
                "md5": "504c72e206e37dd53c307649e302bd5e",
                "sha256": "2378d464f54c207ed129e57aa3b83ece6c3c7d675898a9bf26a1dc8d4c5afa94"
            },
            "downloads": -1,
            "filename": "mh_moe-0.0.2.tar.gz",
            "has_sig": false,
            "md5_digest": "504c72e206e37dd53c307649e302bd5e",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": "<4.0,>=3.6",
            "size": 5066,
            "upload_time": "2024-04-27T00:01:23",
            "upload_time_iso_8601": "2024-04-27T00:01:23.504803Z",
            "url": "https://files.pythonhosted.org/packages/c3/27/b11a07721e0f2eedc06e955a5008b7261c52624016c41e74aaa0acb22a04/mh_moe-0.0.2.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-04-27 00:01:23",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "kyegomez",
    "github_project": "MHMoE",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "requirements": [],
    "lcname": "mh-moe"
}
        
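
The digests in the raw data can be used to verify a downloaded artifact. A minimal sketch, using only the standard library, that fetches the sdist listed above and checks its SHA-256 against the published value:

```python
import hashlib
import urllib.request

# URL and expected digest copied from the raw data above
URL = (
    "https://files.pythonhosted.org/packages/c3/27/"
    "b11a07721e0f2eedc06e955a5008b7261c52624016c41e74aaa0acb22a04/mh_moe-0.0.2.tar.gz"
)
EXPECTED_SHA256 = "2378d464f54c207ed129e57aa3b83ece6c3c7d675898a9bf26a1dc8d4c5afa94"

with urllib.request.urlopen(URL) as resp:
    data = resp.read()

digest = hashlib.sha256(data).hexdigest()
print("sha256 matches:", digest == EXPECTED_SHA256)
```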