[![Multi-Modality](agorabanner.png)](https://discord.gg/qUtxnK2NMf)
# Novel Swarm Intelligence Model Architectures
[![Join our Discord](https://img.shields.io/badge/Discord-Join%20our%20server-5865F2?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/agora-999382051935506503) [![Subscribe on YouTube](https://img.shields.io/badge/YouTube-Subscribe-red?style=for-the-badge&logo=youtube&logoColor=white)](https://www.youtube.com/@kyegomez3242) [![Connect on LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/kye-g-38759a207/) [![Follow on X.com](https://img.shields.io/badge/X.com-Follow-1DA1F2?style=for-the-badge&logo=x&logoColor=white)](https://x.com/kyegomezb)
Swarms in Torch exclusively hosts a vast array of 100% novel swarming models. The purpose of this repo is to create, optimize, and train novel foundation models that outperform the status quo of model architectures such as the Transformer and SSM. We provide implementations of various novel models, like PSO with transformers as particles, ant colony optimization with transformers as ants, a basic NN with transformers as neurons, Mixture of Mambas, and many more. If you would like to help build the future of AI model architectures, please join Agora, the open source lab, here. And if you have any ideas, please submit them as issues and notify me.
## Installation
```bash
pip3 install swarms-torch
```
# Usage
### ParticleSwarmOptimization
```python
from swarms_torch import ParticleSwarmOptimization
pso = ParticleSwarmOptimization(goal="Attention is all you need", n_particles=100)
pso.optimize(iterations=1000)
```
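Under the hood, the swarm evolves candidate strings toward the goal text. A minimal sketch of the per-character fitness such a string-matching swarm might use (the `fitness` helper here is hypothetical, for illustration only, not the library's API):

```python
def fitness(candidate: str, goal: str) -> int:
    """Count positions where the candidate matches the goal string."""
    return sum(c == g for c, g in zip(candidate, goal))


# A perfect candidate scores len(goal); a fully wrong one scores 0.
print(fitness("Attention is all you need", "Attention is all you need"))  # 25
print(fitness("Attention is all you miss", "Attention is all you need"))  # 21
```

Each particle's velocity update then nudges its characters toward the best-scoring particles in the swarm.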
### Ant Colony Optimization
```python
from swarms_torch.ant_colony_swarm import AntColonyOptimization
# Usage:
goal_string = "Hello ACO"
aco = AntColonyOptimization(goal_string, num_iterations=1000)
best_solution = aco.optimize()
print("Best Matched String:", best_solution)
```
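The core ACO idea is a pheromone table per string position that is evaporated every step and reinforced on correct choices. A hedged, self-contained sketch of that loop (all names and the update rule are illustrative, not the library's internals):

```python
import random
import string

ALPHABET = string.ascii_uppercase + " "


def aco_string_step(pheromone, goal, evaporation=0.1, deposit=1.0):
    """One illustrative ACO step: sample a candidate string from the
    per-position pheromone tables, then reinforce matching characters."""
    candidate = "".join(
        random.choices(ALPHABET, weights=[pheromone[i][a] for a in ALPHABET])[0]
        for i in range(len(goal))
    )
    for i, c in enumerate(candidate):
        for a in ALPHABET:
            pheromone[i][a] *= 1.0 - evaporation  # evaporate everywhere
        if c == goal[i]:
            pheromone[i][c] += deposit  # deposit on correct choices
    return candidate


goal = "ACO"
pheromone = [{a: 1.0 for a in ALPHABET} for _ in goal]
for _ in range(300):
    aco_string_step(pheromone, goal)

# The correct character tends to dominate each position's table.
print("".join(max(p, key=p.get) for p in pheromone))
```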
### Neural Network with Transformers as synapses
```python
import torch
from swarms_torch.nnt import NNTransformer
x = torch.randn(1, 10)
network = NNTransformer(
    neuron_count=5,
    num_states=10,
    input_dim=10,
    output_dim=10,
    nhead=2,
)
output = network(x)
print(output)
```
### CellularSwarm
A Cellular Neural Net with transformers as cells, a time simulation, and a local neighborhood!
```python
import torch
from swarms_torch import CellularSwarm
x = torch.randn(10, 32, 512) # sequence length of 10, batch size of 32, embedding size of 512
model = CellularSwarm(cell_count=5, input_dim=512, nhead=8)
output = model(x)
```
### Fish School/Sakana
- An all-new, innovative approach to machine learning that leverages the power of the Transformer architecture. These systems are designed to mimic the behavior of a school of fish, where each fish represents an individual Transformer model. The goal is to optimize the performance of the entire school by learning from the best-performing fish.
```python
import torch
from swarms_torch.fish_school import Fish, FishSchool
# Create random source and target sequences
src = torch.randn(10, 32, 512)
tgt = torch.randn(10, 32, 512)
# Create random labels
labels = torch.randint(0, 512, (10, 32))
# Create a fish and train it on the random data
fish = Fish(512, 8, 6)
fish.train(src, tgt, labels)
print(fish.food) # Print the fish's food
# Create a fish school and optimize it on the random data
school = FishSchool(10, 512, 8, 6, 100)
school.forward(src, tgt, labels)
print(school.fish[0].food) # Print the first fish's food
```
### Swarmalators
```python
from swarms_torch import visualize_swarmalators, simulate_swarmalators
# Init for Swarmalator
# Example usage:
N = 100
J, alpha, beta, gamma, epsilon_a, epsilon_r, R = [0.1] * 7
D = 3 # Ensure D is an integer
xi, sigma_i = simulate_swarmalators(
    N, J, alpha, beta, gamma, epsilon_a, epsilon_r, R, D
)
# Call the visualization function
visualize_swarmalators(xi)
```
### Mixture of Mambas
- A 100% novel implementation of a swarm of Mambas.
- Various fusion methods, such as averaging and weighted aggregation, with more to come, like a gating mechanism.
- Fusion methods: `average`, `weighted`, `absmax`, `weighted_softmax`, or your own custom function.
```python
import torch
from swarms_torch import MixtureOfMambas
# Create a 3D tensor for text
x = torch.rand(1, 512, 512)
# Create an instance of the MixtureOfMambas model
model = MixtureOfMambas(
    num_mambas=2,            # Number of Mambas in the model
    dim=512,                 # Dimension of the input tensor
    d_state=1024,            # Dimension of the hidden state
    depth=4,                 # Number of layers in the model
    d_conv=1024,             # Dimension of the convolutional layers
    expand=4,                # Expansion factor for the model
    fusion_method="absmax",  # Fusion method for combining the Mambas' outputs
    custom_fusion_func=None, # Custom fusion function (if any)
)
# Pass the input tensor through the model and print the output shape
print(model(x).shape)
```
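For the `custom_fusion_func` hook, a reasonable assumption is a callable that takes the per-Mamba outputs and returns one fused tensor. A hypothetical fusion function along those lines, weighting each expert's output by a softmax over its negative L2 norm (the signature and behavior are illustrative assumptions, not the library's contract):

```python
import torch


def norm_weighted_fusion(outputs: list) -> torch.Tensor:
    """Hypothetical fusion: softmax-weight each expert's output by its
    negative L2 norm (smaller norm -> larger weight), then sum."""
    stacked = torch.stack(outputs)          # (num_mambas, ...)
    norms = stacked.flatten(1).norm(dim=1)  # one scalar norm per expert
    weights = torch.softmax(-norms, dim=0)  # weights sum to 1
    shape = (-1,) + (1,) * (stacked.dim() - 1)
    return (weights.view(*shape) * stacked).sum(dim=0)


outs = [torch.randn(1, 512, 512) for _ in range(2)]
print(norm_weighted_fusion(outs).shape)  # torch.Size([1, 512, 512])
```

Because the weights sum to 1, fusing identical outputs returns that output unchanged, which is a handy sanity check for any custom fusion function.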
### `SwitchMoE`
```python
import torch
from swarms_torch import SwitchMoE
# Example usage:
input_dim = 768 # Dimension of input tokens
hidden_dim = 2048 # Hidden dimension of experts
output_dim = 768 # Output dimension, should match input dimension for residual connection
num_experts = 16 # Number of experts
moe_layer = SwitchMoE(
    dim=input_dim,
    hidden_dim=hidden_dim,
    output_dim=output_dim,
    num_experts=num_experts,
    use_aux_loss=False,
)
# Create a sample input tensor (batch_size, seq_len, input_dim)
x = torch.rand(32, 128, input_dim)
# Forward pass through the MoE layer with auxiliary loss computation
output, auxiliary_loss = moe_layer(x)
# Now, 'output' contains the MoE output, and 'auxiliary_loss' contains the load balancing loss.
# This auxiliary loss should be added to the main loss function during training.
print(output)
print(auxiliary_loss)
```
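As the comment above notes, the auxiliary load-balancing loss should be folded into the main objective during training. A minimal sketch of that combination (the 0.01 coefficient is an illustrative choice, not a library default):

```python
def combine_losses(task_loss, aux_loss, aux_coef=0.01):
    """Fold the MoE load-balancing loss into the main training objective.

    Works on floats or on torch scalar tensors alike; aux_coef is a
    hypothetical hyperparameter you would tune for your task.
    """
    return task_loss + aux_coef * aux_loss


total = combine_losses(2.0, 0.5)
print(total)
```

The combined scalar is what you would call `.backward()` on in a torch training loop.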
### SimpleMoE
A very simple Mixture of Experts with softmax as a gating mechanism.
```python
import torch
from swarms_torch import SimpleMoE
# Example usage:
input_dim = 512 # Dimension of input tokens
hidden_dim = 1024 # Hidden dimension of experts
output_dim = 512 # Output dimension, should match input dimension for residual connection
num_experts = 4 # Number of experts
moe = SimpleMoE(input_dim, hidden_dim, output_dim, num_experts)
# Create a sample input tensor (batch_size, seq_len, input_dim)
x = torch.rand(10, 16, input_dim)
# Forward pass through the MoE layer
output = moe(x)
print(output)
```
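The softmax gate at the heart of such a layer can be sketched in a few lines of plain PyTorch. This is a minimal illustration of the mechanism, not the library's actual implementation: every expert runs on every token, and a learned gate mixes their outputs.

```python
import torch
import torch.nn as nn


class TinyMoE(nn.Module):
    """Minimal softmax-gated mixture of experts (dense, no routing)."""

    def __init__(self, dim: int, hidden: int, num_experts: int):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )
        self.gate = nn.Linear(dim, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(x), dim=-1)                   # (B, T, E)
        expert_out = torch.stack([e(x) for e in self.experts], dim=-1)  # (B, T, D, E)
        return torch.einsum("bte,btde->btd", weights, expert_out)


x = torch.rand(2, 8, 32)
print(TinyMoE(32, 64, 4)(x).shape)  # torch.Size([2, 8, 32])
```

Sparse variants like `SwitchMoE` differ mainly in routing each token to only its top expert instead of mixing all of them.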
### Firefly
Exploration into the Firefly algorithm (a generalized version of particle swarm optimization) in PyTorch. We are particularly interested in hybrid <a href="https://academic.oup.com/jcde/article/9/2/706/6566441">firefly + genetic algorithms</a> and <a href="https://www.sciencedirect.com/science/article/abs/pii/S0957417423005298">gender-based</a> variants. This code was adapted from lucidrains.
```python
from swarms_torch.firefly import FireflyOptimizer
from torch import Tensor
def rosenbrock(x: Tensor) -> Tensor:
    return (
        100 * (x[..., 1:] - x[..., :-1] ** 2) ** 2 + (1 - x[..., :-1]) ** 2
    ).sum(dim=-1)


if __name__ == "__main__":
    optimizer = FireflyOptimizer(cost_function=rosenbrock)
    optimizer.optimize()
    best_solution = optimizer.get_best_solution()
    print(f"Best solution: {best_solution}")
```
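The Rosenbrock function has its global minimum at the all-ones vector, where it evaluates to zero, which makes it a convenient sanity check for the optimizer. Repeating the cost function for a self-contained check:

```python
import torch
from torch import Tensor


def rosenbrock(x: Tensor) -> Tensor:
    return (
        100 * (x[..., 1:] - x[..., :-1] ** 2) ** 2 + (1 - x[..., :-1]) ** 2
    ).sum(dim=-1)


print(rosenbrock(torch.ones(5)).item())   # 0.0 at the global minimum
print(rosenbrock(torch.zeros(5)).item())  # 4.0 away from it
```

A well-behaved run should drive the best solution toward the all-ones vector and the cost toward zero.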
# Documentation
- [Click here for documentation](https://swarmstorch.readthedocs.io/en/latest/swarms/)
# Examples
- The playground folder contains example scripts for each swarm, such as the ant colony, fish school, and spiral optimization.
## 🫶 Contributions:
The easiest way to contribute is to pick any issue with the `good first issue` tag 💪. Read the Contributing guidelines [here](/CONTRIBUTING.md). Bug Report? [File here](https://github.com/swarms/gateway/issues) | Feature Request? [File here](https://github.com/swarms/gateway/issues)
Swarms is an open-source project, and contributions are VERY welcome. If you want to contribute, you can create new features, fix bugs, or improve the infrastructure. Please refer to the [CONTRIBUTING.md](https://github.com/kyegomez/swarms-pytorch/blob/master/CONTRIBUTING.md) and our [contributing board](https://github.com/users/kyegomez/projects/9) to participate in Roadmap discussions!
<a href="https://github.com/kyegomez/swarms-pytorch/graphs/contributors">
<img src="https://contrib.rocks/image?repo=kyegomez/swarms-pytorch" />
</a>
----
## Community
Join our growing community around the world, for real-time support, ideas, and discussions on Swarms 😊
- View our official [Blog](https://swarms.apac.ai)
- Chat live with us on [Discord](https://discord.gg/kS3rwKs3ZC)
- Follow us on [Twitter](https://twitter.com/kyegomez)
- Connect with us on [LinkedIn](https://www.linkedin.com/company/the-swarm-corporation)
- Visit us on [YouTube](https://www.youtube.com/channel/UC9yXyitkbU_WSy7bd_41SqQ)
- [Join the Swarms community on Discord!](https://discord.gg/AJazBmhKnr)
- Join our Swarms Community Gathering every Thursday at 1pm NYC Time to unlock the potential of autonomous agents in automating your daily tasks [Sign up here](https://lu.ma/5p2jnc2v)
## Accelerate Backlog
Help us accelerate our backlog by supporting us financially! Note, we're an open source corporation and so all the revenue we generate is through donations at the moment ;)
<a href="https://polar.sh/kyegomez"><img src="https://polar.sh/embed/fund-our-backlog.svg?org=kyegomez" /></a>
# License
MIT
## Citations
```bibtex
@article{Yang2018WhyTF,
    title   = {Why the Firefly Algorithm Works?},
    author  = {Xin-She Yang and Xingshi He},
    journal = {ArXiv},
    year    = {2018},
    volume  = {abs/1806.01632},
    url     = {https://api.semanticscholar.org/CorpusID:46940737}
}
```
```bibtex
@article{ElShorbagy2022Hybrid,
    author  = {El-Shorbagy, M. and Elrefaey, Adel},
    year    = {2022},
    month   = {04},
    pages   = {706-730},
    title   = {A hybrid genetic-firefly algorithm for engineering design problems},
    volume  = {9},
    journal = {Journal of Computational Design and Engineering},
    doi     = {10.1093/jcde/qwac013}
}
```