swarms-torch

- Name: swarms-torch
- Version: 0.2.1
- Home page: https://github.com/kyegomez/swarms-pytorch
- Summary: swarms-torch - Pytorch
- Upload time: 2024-01-21 23:43:15
- Author: Kye Gomez (kye@apac.ai)
- Requires Python: >=3.6,<4.0
- License: MIT
- Keywords: artificial intelligence, deep learning, optimizers, prompt engineering
- Requirements: torch==2.1.2, einops==0.7.0, pandas==2.1.4, zetascale==1.4.4, pytest==7.4.2, mkdocs, mkdocs-material, mkdocs-glightbox
            [![Multi-Modality](agorabanner.png)](https://discord.gg/qUtxnK2NMf)

# Swarms in Torch
Swarms in Torch exclusively hosts a vast array of 100% novel swarming models. Our purpose for this repo is to create, optimize, and train novel foundation models that outperform the status quo of architectures such as the Transformer and SSM. We provide implementations of various novel models like PSO with transformers as particles, ant colony with transformers as ants, a basic NN with transformers as neurons, Mixture of Mambas, and many more. If you would like to help contribute to the future of AI model architectures, please join Agora, the open source lab. And if you have any ideas, please submit them as issues and notify me.


## Installation

```bash
pip3 install swarms-torch
```

# Usage

### ParticleSwarmOptimization

```python
from swarms_torch import ParticleSwarmOptimization


pso = ParticleSwarmOptimization(goal="Attention is all you need", n_particles=100)

pso.optimize(iterations=1000)
```
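
For intuition, here is a generic sketch of the classic PSO update rule in plain PyTorch. This illustrates the algorithm itself, not swarms-torch internals; the coefficients and dimensions below are illustrative assumptions:

```python
import torch

# Classic PSO step (illustrative sketch, not swarms-torch internals):
# v <- w*v + c1*r1*(personal_best - x) + c2*r2*(global_best - x); x <- x + v
w, c1, c2 = 0.7, 1.5, 1.5      # inertia and acceleration coefficients (assumed values)
x = torch.rand(100, 26)        # 100 particles in an illustrative 26-dim search space
v = torch.zeros_like(x)        # particle velocities
personal_best = x.clone()      # best position found by each particle so far
global_best = x[0].clone()     # best position found by the whole swarm so far

r1, r2 = torch.rand_like(x), torch.rand_like(x)  # stochastic exploration terms
v = w * v + c1 * r1 * (personal_best - x) + c2 * r2 * (global_best - x)
x = x + v                      # move each particle along its updated velocity
```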

### Ant Colony Optimization
```python
from swarms_torch.ant_colony_swarm import AntColonyOptimization

# Usage:
goal_string = "Hello ACO"
aco = AntColonyOptimization(goal_string, num_iterations=1000)
best_solution = aco.optimize()
print("Best Matched String:", best_solution)

```

### Neural Network with Transformers as synapses
```python
import torch
from swarms_torch.nnt import NNTransformer

x = torch.randn(1, 10)

network = NNTransformer(
    neuron_count=5,
    num_states=10,
    input_dim=10,
    output_dim=10,
    nhead=2,
)
output = network(x)
print(output)
```

### CellularSwarm
A Cellular Neural Net with transformers as cells, time simulation, and a local neighborhood!

```python
import torch
from swarms_torch import CellularSwarm

x = torch.randn(10, 32, 512)  # sequence length of 10, batch size of 32, embedding size of 512
model = CellularSwarm(cell_count=5, input_dim=512, nhead=8)
output = model(x)

```
### Fish School/Sakana
- An all-new, innovative approach to machine learning that leverages the power of the Transformer model architecture. The system is designed to mimic the behavior of a school of fish, where each fish represents an individual Transformer model. The goal is to optimize the performance of the entire school by learning from the best-performing fish.

```python
import torch
from swarms_torch.fish_school import Fish, FishSchool

# Create random source and target sequences
src = torch.randn(10, 32, 512)
tgt = torch.randn(10, 32, 512)

# Create random labels
labels = torch.randint(0, 512, (10, 32))

# Create a fish and train it on the random data
fish = Fish(512, 8, 6)
fish.train(src, tgt, labels)
print(fish.food)  # Print the fish's food

# Create a fish school and optimize it on the random data
school = FishSchool(10, 512, 8, 6, 100)
school.forward(src, tgt, labels)
print(school.fish[0].food)  # Print the first fish's food

```

### Swarmalators
```python
from swarms_torch import visualize_swarmalators, simulate_swarmalators

# Example usage for the Swarmalator simulation
N = 100
J, alpha, beta, gamma, epsilon_a, epsilon_r, R = [0.1] * 7
D = 3  # number of spatial dimensions (must be an integer)
xi, sigma_i = simulate_swarmalators(
    N, J, alpha, beta, gamma, epsilon_a, epsilon_r, R, D
)


# Call the visualization function
visualize_swarmalators(xi)
```

### Mixture of Mambas
- A 100% novel implementation of a swarm of Mambas, `MixtureOfMambas`.
- Fusion methods: average, weighted, absmax, weighted_softmax, or your own custom function (see the sketch after the example below).
- More fusion methods, such as a gating mechanism, are planned.

```python
import torch
from swarms_torch import MixtureOfMambas

# Create a 3D tensor for text
x = torch.rand(1, 512, 512)

# Create an instance of the MixtureOfMambas model
model = MixtureOfMambas(
    num_mambas=2,            # Number of Mambas in the model
    dim=512,                 # Dimension of the input tensor
    d_state=1024,            # Dimension of the hidden state
    depth=4,                 # Number of layers in the model
    d_conv=1024,             # Dimension of the convolutional layers
    expand=4,                # Expansion factor for the model
    fusion_method="absmax",  # Fusion method for combining Mambas' outputs
    custom_fusion_func=None  # Custom fusion function (if any)
)

# Pass the input tensor through the model and print the output shape
print(model(x).shape)

```
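If you want to roll your own fusion rule, here is a minimal sketch. It assumes, purely for illustration, that `custom_fusion_func` receives the per-Mamba outputs stacked along a leading dimension and returns one fused tensor; check the source for the actual contract:

```python
import torch

# Hypothetical custom fusion: a softmax-weighted average over per-Mamba outputs.
# Assumed input shape: (num_mambas, batch, seq_len, dim); returns (batch, seq_len, dim).
def softmax_weighted_fusion(outputs: torch.Tensor) -> torch.Tensor:
    # One scalar weight per Mamba, from the softmax of its mean activation
    weights = torch.softmax(outputs.mean(dim=(1, 2, 3)), dim=0)
    return (weights.view(-1, 1, 1, 1) * outputs).sum(dim=0)

# model = MixtureOfMambas(..., custom_fusion_func=softmax_weighted_fusion)
```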


### `SwitchMoE`

```python
import torch 
from swarms_torch import SwitchMoE

# Example usage:
input_dim = 768  # Dimension of input tokens
hidden_dim = 2048  # Hidden dimension of experts
output_dim = 768  # Output dimension, should match input dimension for residual connection
num_experts = 16  # Number of experts

moe_layer = SwitchMoE(
    dim=input_dim,
    hidden_dim=hidden_dim,
    output_dim=output_dim,
    num_experts=num_experts,
    use_aux_loss=True,
)

# Create a sample input tensor (batch_size, seq_len, input_dim)
x = torch.rand(32, 128, input_dim)

# Forward pass through the MoE layer with auxiliary loss computation
output, auxiliary_loss = moe_layer(x)

# Now, 'output' contains the MoE output, and 'auxiliary_loss' contains the load balancing loss.
# This auxiliary loss should be added to the main loss function during training.

print(output)
print(auxiliary_loss)
```
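
As the comments above note, the auxiliary loss is meant to be folded into the training objective. A minimal sketch of that step, continuing the example above; the toy targets and the 0.01 weighting are illustrative assumptions, not library defaults:

```python
import torch
import torch.nn.functional as F

# Toy regression targets, same shape as the MoE output (illustrative only)
targets = torch.rand(32, 128, output_dim)

output, auxiliary_loss = moe_layer(x)

# Combine the task loss with the load-balancing loss; the 0.01 auxiliary
# weight is a tunable hyperparameter, not a library default.
loss = F.mse_loss(output, targets) + 0.01 * auxiliary_loss
loss.backward()
```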
### SimpleMoE
A very simple Mixture of Experts with softmax as a gating mechanism.

```python
import torch 
from swarms_torch import SimpleMoE

# Example usage:
input_dim = 512  # Dimension of input tokens
hidden_dim = 1024  # Hidden dimension of experts
output_dim = 512  # Output dimension, should match input dimension for residual connection
num_experts = 4  # Number of experts

moe = SimpleMoE(input_dim, hidden_dim, output_dim, num_experts)

# Create a sample input tensor (batch_size, seq_len, input_dim)
x = torch.rand(10, 16, input_dim)

# Forward pass through the MoE layer
output = moe(x)
print(output)
```
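
To make "softmax as a gating mechanism" concrete, here is a self-contained sketch of dense softmax gating in plain PyTorch. It illustrates the idea, not SimpleMoE's actual internals:

```python
import torch
import torch.nn as nn

# Minimal softmax gating: a linear layer scores each expert per token,
# softmax turns the scores into mixture weights, and the output is the
# weighted sum of every expert's output.
dim, num_experts = 512, 4
gate = nn.Linear(dim, num_experts)
experts = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_experts)])

x = torch.rand(10, 16, dim)                       # (batch, seq_len, dim)
weights = torch.softmax(gate(x), dim=-1)          # (batch, seq_len, num_experts)
expert_outs = torch.stack([e(x) for e in experts], dim=-1)  # (batch, seq_len, dim, num_experts)
out = (expert_outs * weights.unsqueeze(-2)).sum(dim=-1)     # (batch, seq_len, dim)
print(out.shape)  # torch.Size([10, 16, 512])
```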


# Documentation
- [Click here for documentation](https://swarmstorch.readthedocs.io/en/latest/swarms/)

# Examples
- The playground folder contains example scripts for each swarm, including the ant colony, fish school, and spiral optimization.


## 🫶 Contributions:

The easiest way to contribute is to pick any issue with the `good first issue` tag 💪. Read the Contributing guidelines [here](/CONTRIBUTING.md). Bug Report? [File here](https://github.com/swarms/gateway/issues) | Feature Request? [File here](https://github.com/swarms/gateway/issues)

Swarms is an open-source project, and contributions are VERY welcome. If you want to contribute, you can create new features, fix bugs, or improve the infrastructure. Please refer to the [CONTRIBUTING.md](https://github.com/kyegomez/swarms-pytorch/blob/master/CONTRIBUTING.md) and our [contributing board](https://github.com/users/kyegomez/projects/9) to participate in Roadmap discussions!

<a href="https://github.com/kyegomez/swarms-pytorch/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=kyegomez/swarms-pytorch" />
</a>

----

## Community

Join our growing community around the world for real-time support, ideas, and discussions on Swarms 😊

- View our official [Blog](https://swarms.apac.ai)
- Chat live with us on [Discord](https://discord.gg/kS3rwKs3ZC)
- Follow us on [Twitter](https://twitter.com/kyegomez)
- Connect with us on [LinkedIn](https://www.linkedin.com/company/the-swarm-corporation)
- Visit us on [YouTube](https://www.youtube.com/channel/UC9yXyitkbU_WSy7bd_41SqQ)
- [Join the Swarms community on Discord!](https://discord.gg/AJazBmhKnr)
- Join our Swarms Community Gathering every Thursday at 1pm NYC Time to unlock the potential of autonomous agents in automating your daily tasks [Sign up here](https://lu.ma/5p2jnc2v)

---

## Discovery Call
Book a discovery call to learn how Swarms can lower your operating costs by 40% with swarms of autonomous agents at lightspeed. [Click here to book a time that works for you!](https://calendly.com/swarm-corp/30min?month=2023-11)

## Accelerate Backlog
Help us accelerate our backlog by supporting us financially! Note: we're an open-source corporation, so at the moment all of our revenue comes from donations ;)

<a href="https://polar.sh/kyegomez"><img src="https://polar.sh/embed/fund-our-backlog.svg?org=kyegomez" /></a>

# License
MIT
            
