# Swarm Models
[![Join our Discord](https://img.shields.io/badge/Discord-Join%20our%20server-5865F2?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/agora-999382051935506503) [![Subscribe on YouTube](https://img.shields.io/badge/YouTube-Subscribe-red?style=for-the-badge&logo=youtube&logoColor=white)](https://www.youtube.com/@kyegomez3242) [![Connect on LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/kye-g-38759a207/) [![Follow on X.com](https://img.shields.io/badge/X.com-Follow-1DA1F2?style=for-the-badge&logo=x&logoColor=white)](https://x.com/kyegomezb)
Swarm Models provides a unified, secure, and highly scalable interface for interacting with multiple LLM and multi-modal APIs across different providers. It is built to streamline your API integrations, ensuring production-grade reliability and robust performance.
## **Key Features**
- **Multi-Provider Support**: Integrate seamlessly with APIs from OpenAI, Anthropic, Azure, and more.
- **Enterprise-Grade Security**: Built-in security protocols to protect your API keys and sensitive data, ensuring compliance with industry standards.
- **Lightning-Fast Performance**: Optimized for low latency and high throughput, Swarm Models delivers blazing-fast API responses, suitable for real-time applications.
- **Ease of Use**: Simplified API interaction with intuitive `.run(task)` and `__call__` methods, making integration effortless.
- **Scalability for All Use Cases**: Whether it's a small script or a massive enterprise-scale application, Swarm Models scales effortlessly.
- **Production-Grade Reliability**: Tested and proven in enterprise environments, ensuring consistent uptime and failover capabilities.
---
## **Onboarding**
Swarm Models simplifies the way you interact with different APIs by providing a unified interface for all models.
### **1. Install Swarm Models**
```bash
pip3 install -U swarm-models
```
### **2. Set Your Keys**
```bash
export OPENAI_API_KEY="your_openai_api_key"
export GROQ_API_KEY="your_groq_api_key"
export ANTHROPIC_API_KEY="your_anthropic_api_key"
export AZURE_OPENAI_API_KEY="your_azure_openai_api_key"
```
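In Python, exported keys are then read back with `os.getenv`. A minimal helper that fails fast when a key is missing can save debugging time later (the `load_key` function is illustrative, not part of the package):

```python
import os

def load_key(name: str) -> str:
    """Read an API key from the environment, failing fast if it is unset."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"Environment variable {name} is not set")
    return value

# Stand-in value so the sketch runs anywhere; in practice your shell exports it.
os.environ.setdefault("OPENAI_API_KEY", "sk-demo")
print(load_key("OPENAI_API_KEY"))
```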
### **3. Initialize a Model**
Import the desired model from the package and initialize it with your API key or necessary configuration.
```python
from swarm_models import YourDesiredModel
model = YourDesiredModel(api_key="your_api_key")  # plus any model-specific configuration
```
### **4. Run Your Task**
Use the `.run(task)` method, or simply call the model directly with `model(task)`.
```python
task = "Define your task here"
result = model.run(task)
# Or, equivalently:
# result = model(task)
```
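The equivalence between `.run(task)` and calling the model directly typically comes from `__call__` delegating to `run`. A minimal sketch of that pattern (the class names here are illustrative, not the package's actual classes):

```python
class BaseModel:
    """Sketch of a model wrapper whose __call__ delegates to .run()."""

    def run(self, task: str) -> str:
        raise NotImplementedError

    def __call__(self, task: str) -> str:
        # model(task) is just shorthand for model.run(task)
        return self.run(task)

class EchoModel(BaseModel):
    def run(self, task: str) -> str:
        return f"echo: {task}"

model = EchoModel()
print(model.run("hello") == model("hello"))  # → True
```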
### **5. Enjoy the Results**
```python
print(result)
```
---
## **Full Code Example**
```python
from swarm_models import OpenAIChat
import os
# Get the OpenAI API key from the environment variable
api_key = os.getenv("OPENAI_API_KEY")
# Create an instance of the OpenAIChat class
model = OpenAIChat(openai_api_key=api_key, model_name="gpt-4o-mini")
# Query the model with a question
out = model(
"What is the best state to register a business in the US for the least amount of taxes?"
)
# Print the model's response
print(out)
```
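Real API calls can fail transiently (rate limits, timeouts), so production code usually wraps calls like the one above in a retry loop. A generic sketch with exponential backoff, demonstrated against a stub instead of a live model (the helper is an illustration, not a package feature):

```python
import time

def call_with_retry(call, task, retries=3, base_delay=0.01):
    """Retry a model call with exponential backoff on any exception."""
    for attempt in range(retries):
        try:
            return call(task)
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

# Stub that fails twice before succeeding, simulating transient errors.
attempts = {"n": 0}
def flaky_model(task):
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient failure")
    return f"answer to: {task}"

print(call_with_retry(flaky_model, "What is the best state?"))
```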
---
## `TogetherLLM` Documentation
The `TogetherLLM` class simplifies interaction with LLMs hosted on the Together platform. It provides a straightforward way to run tasks on these models, including support for concurrent and batch processing.
### Initialization
To use `TogetherLLM`, you need to initialize it with your API key, the name of the model you want to use, and optionally, a system prompt. The system prompt is used to provide context to the model for the tasks you will run.
Here's an example of how to initialize `TogetherLLM`:
```python
import os
from swarm_models import TogetherLLM
model_runner = TogetherLLM(
api_key=os.environ.get("TOGETHER_API_KEY"),
model_name="meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo",
system_prompt="You're Larry Fink",
)
```
### Running Tasks
Once initialized, you can run tasks on the model using the `run` method. This method takes a task string as an argument and returns the response from the model.
Here's an example of running a single task:
```python
task = "How do we allocate capital efficiently in your opinion Larry?"
response = model_runner.run(task)
print(response)
```
### Running Multiple Tasks Concurrently
`TogetherLLM` also supports running multiple tasks concurrently using the `run_concurrently` method. This method takes a list of task strings and returns a list of responses from the model.
Here's an example of running multiple tasks concurrently:
```python
tasks = [
"What are the top-performing mutual funds in the last quarter?",
"How do I evaluate the risk of a mutual fund?",
"What are the fees associated with investing in a mutual fund?",
"Can you recommend a mutual fund for a beginner investor?",
"How do I diversify my portfolio with mutual funds?",
]
responses = model_runner.run_concurrently(tasks)
for response in responses:
print(response)
```
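Under the hood, concurrent execution of this kind is typically a thread pool mapping `run` over the tasks. A sketch using a stub model so it runs without an API key (assuming, as the method's list return suggests, that responses come back in task order):

```python
from concurrent.futures import ThreadPoolExecutor

class StubModel:
    """Stand-in for TogetherLLM so the sketch runs without credentials."""
    def run(self, task: str) -> str:
        return f"response to: {task}"

def run_concurrently(model, tasks, max_workers=4):
    # executor.map preserves input order, so responses align with tasks
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(model.run, tasks))

tasks = ["task one", "task two", "task three"]
for response in run_concurrently(StubModel(), tasks):
    print(response)
```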
## **Enterprise-Grade Features**
1. **Security**: API keys and user data are handled with utmost care, utilizing encryption and best security practices to protect your sensitive information.
2. **Production Reliability**: Swarm Models has undergone rigorous testing to ensure that it can handle high traffic and remains resilient in enterprise-grade environments.
3. **Fail-Safe Mechanisms**: Built-in failover handling to ensure uninterrupted service even under heavy load or network issues.
4. **Unified API**: No more dealing with multiple SDKs or libraries. Swarm Models standardizes your interactions across providers like OpenAI, Anthropic, Azure, and more, so you can focus on what matters.
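The failover idea in point 3 can be sketched as trying providers in order until one succeeds. The provider callables below are stubs, and the real package's mechanism may differ:

```python
def run_with_failover(task, providers):
    """Try each provider callable in order; return the first success."""
    last_error = None
    for provider in providers:
        try:
            return provider(task)
        except Exception as err:
            last_error = err
    raise RuntimeError("all providers failed") from last_error

def primary(task):
    raise TimeoutError("primary provider is down")

def backup(task):
    return f"backup handled: {task}"

print(run_with_failover("summarize this report", [primary, backup]))
```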
---
## **Available Models**
| Model Name | Description |
|---------------------------|-------------------------------------------------------|
| `OpenAIChat` | Chat model for OpenAI's GPT-3 and GPT-4 APIs. |
| `Anthropic` | Model for interacting with Anthropic's APIs. |
| `AzureOpenAI` | Azure's implementation of OpenAI's models. |
| `Dalle3` | Model for generating images from text prompts. |
| `NvidiaLlama31B` | Llama model for causal language generation. |
| `Fuyu` | Multi-modal model for image and text processing. |
| `Gemini` | Multi-modal model for vision and language tasks. |
| `Vilt` | Vision-and-Language Transformer for question answering.|
| `TogetherLLM`             | Interface for LLMs hosted on the Together platform.   |
| `FireWorksAI`             | Interface for models hosted on Fireworks AI.          |
| `ReplicateChat`           | Chat interface for models hosted on Replicate.        |
| `HuggingfaceLLM` | Interface for Hugging Face models. |
| `CogVLMMultiModal` | Multi-modal model for vision and language tasks. |
| `LayoutLMDocumentQA` | Model for document question answering. |
| `GPT4VisionAPI` | Model for analyzing images with GPT-4 capabilities. |
| `LlamaForCausalLM` | Causal language model from the Llama family. |
| `GroundedSAMTwo`          | Analyzes and tracks objects in images (GPU only).     |
---
## **Support & Contributions**
- **Documentation**: Comprehensive guides, API references, and best practices are available in our official [Documentation](https://docs.swarms.world).
- **GitHub**: Explore the code, report issues, and contribute to the project via our [GitHub repository](https://github.com/The-Swarm-Corporation/swarm-models).
---
## **License**
Swarm Models is released under the [MIT License](https://github.com/The-Swarm-Corporation/swarm-models/LICENSE).
---