# ClusterOps
[![Join our Discord](https://img.shields.io/badge/Discord-Join%20our%20server-5865F2?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/agora-999382051935506503) [![Subscribe on YouTube](https://img.shields.io/badge/YouTube-Subscribe-red?style=for-the-badge&logo=youtube&logoColor=white)](https://www.youtube.com/@kyegomez3242) [![Connect on LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/kye-g-38759a207/) [![Follow on X.com](https://img.shields.io/badge/X.com-Follow-1DA1F2?style=for-the-badge&logo=x&logoColor=white)](https://x.com/kyegomezb)
[![License](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE)
[![Python Version](https://img.shields.io/badge/python-3.10%2B-brightgreen.svg)](https://python.org)
[![Build Status](https://img.shields.io/github/actions/workflow/status/swarms-team/clusterops/test.yml?branch=master)](https://github.com/swarms-team/clusterops/actions)
[![Coverage Status](https://img.shields.io/codecov/c/github/swarms-team/clusterops)](https://codecov.io/gh/swarms-team/clusterops)
**ClusterOps** is an enterprise-grade Python library developed and maintained by the **Swarms Team** to help you manage and execute agents on specific **CPUs** and **GPUs** across clusters. This tool enables advanced CPU and GPU selection, dynamic task allocation, and resource monitoring, making it ideal for high-performance distributed computing environments.
---
## Features
- **CPU Execution**: Dynamically assign tasks to specific CPU cores.
- **GPU Execution**: Execute tasks on specific GPUs or dynamically select the best available GPU based on memory usage.
- **Fault Tolerance**: Built-in retry logic with exponential backoff for handling transient errors.
- **Resource Monitoring**: Real-time CPU and GPU resource monitoring (e.g., free memory on GPUs).
- **Logging**: Advanced logging configuration with customizable log levels (DEBUG, INFO, ERROR).
- **Scalability**: Supports multi-GPU task execution with Ray for distributed computation.
---
## Installation
```bash
pip3 install -U clusterops
```
---
## Quick Start
The following example demonstrates how to use ClusterOps to run tasks on specific CPUs and GPUs.
```python
from clusterops import (
    list_available_cpus,
    execute_on_cpu,
    execute_with_cpu_cores,
    list_available_gpus,
    execute_on_gpu,
    execute_on_multiple_gpus,
)

# Example function to run
def sample_task(n: int) -> int:
    return n * n

# List CPUs and execute on CPU 0
cpus = list_available_cpus()
execute_on_cpu(0, sample_task, 10)

# Execute using 4 CPU cores
execute_with_cpu_cores(4, sample_task, 10)

# List GPUs and execute on GPU 0
gpus = list_available_gpus()
execute_on_gpu(0, sample_task, 10)

# Execute across multiple GPUs
execute_on_multiple_gpus([0, 1], sample_task, 10)
```
## GPU Scheduler
The GPU Scheduler is a Ray Serve deployment that manages job execution with fault tolerance, job retries, and scaling. It uses the `GPUJobExecutor` to execute tasks on available GPUs.
See the [GPU Scheduler](/clusterops/gpu_scheduler.py) for more details.
```python
from clusterops import gpu_scheduler

async def sample_task(n: int) -> int:
    return n * n

print(gpu_scheduler(sample_task, priority=1, n=10))
```
---
## Configuration
ClusterOps provides configuration through environment variables, making it adaptable for different environments (development, staging, production).
### Environment Variables
- **`LOG_LEVEL`**: Configures logging verbosity. Options: `DEBUG`, `INFO`, `ERROR`. Default is `INFO`.
- **`RETRY_COUNT`**: Number of times to retry a task in case of failure. Default is 3.
- **`RETRY_DELAY`**: Initial delay in seconds before retrying. Default is 1 second.
Set these variables in your environment:
```bash
export LOG_LEVEL=DEBUG
export RETRY_COUNT=5
export RETRY_DELAY=2.0
```
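
For reference, here is a minimal sketch of how such defaults are typically resolved (illustrative only; the actual parsing lives inside the `clusterops` package):

```python
import os

# Illustrative resolution of the documented defaults; not ClusterOps internals.
LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO")            # DEBUG | INFO | ERROR
RETRY_COUNT = int(os.getenv("RETRY_COUNT", "3"))      # retries on failure
RETRY_DELAY = float(os.getenv("RETRY_DELAY", "1.0"))  # initial backoff in seconds
```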
---
## Docs
---
### `list_available_cpus() -> List[int]`
**Description:**
Lists all available CPU cores on the system.
**Returns:**
- `List[int]`: A list of available CPU core indices.
**Raises:**
- `RuntimeError`: If no CPUs are found.
**Example Usage:**
```python
from clusterops import list_available_cpus

cpus = list_available_cpus()
print(f"Available CPUs: {cpus}")
```
---
### `select_best_gpu() -> Optional[int]`
**Description:**
Selects the GPU with the most free memory.
**Returns:**
- `Optional[int]`: The GPU ID of the best available GPU, or `None` if no GPUs are available.
**Example Usage:**
```python
from clusterops import select_best_gpu

best_gpu = select_best_gpu()
print(f"Best GPU ID: {best_gpu}")
```
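
A common pattern is to pair `select_best_gpu()` with `execute_on_gpu()` (documented below) and fall back when no GPU is present; a minimal sketch:

```python
from clusterops import select_best_gpu, execute_on_gpu

def sample_task(n: int) -> int:
    return n * n

# Dispatch to the GPU with the most free memory, if one exists.
best_gpu = select_best_gpu()
if best_gpu is not None:
    print(execute_on_gpu(best_gpu, sample_task, 10))
else:
    print("No GPU available")
```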
---
### `execute_on_cpu(cpu_id: int, func: Callable, *args: Any, **kwargs: Any) -> Any`
**Description:**
Executes a function on a specific CPU core.
**Arguments:**
- `cpu_id (int)`: The CPU core to run the function on.
- `func (Callable)`: The function to be executed.
- `*args (Any)`: Positional arguments for the function.
- `**kwargs (Any)`: Keyword arguments for the function.
**Returns:**
- `Any`: The result of the function execution.
**Raises:**
- `ValueError`: If the CPU core specified is invalid.
- `RuntimeError`: If there is an error executing the function on the CPU.
**Example Usage:**
```python
from clusterops import execute_on_cpu  # sample_task as defined in the Quick Start

result = execute_on_cpu(0, sample_task, 10)
print(f"Result: {result}")
```
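
Because an invalid core raises `ValueError`, one way to guard the call is to check the core against `list_available_cpus()` first; a small sketch:

```python
from clusterops import list_available_cpus, execute_on_cpu

def sample_task(n: int) -> int:
    return n * n

# Only dispatch to a core the system actually reports as available.
cpu_id = 0
if cpu_id in list_available_cpus():
    print(execute_on_cpu(cpu_id, sample_task, 10))
```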
---
### `retry_with_backoff(func: Callable, retries: int = RETRY_COUNT, delay: float = RETRY_DELAY, *args: Any, **kwargs: Any) -> Any`
**Description:**
Retries a function with exponential backoff in case of failure.
**Arguments:**
- `func (Callable)`: The function to execute with retries.
- `retries (int)`: Number of retries. Defaults to `RETRY_COUNT`.
- `delay (float)`: Delay between retries in seconds. Defaults to `RETRY_DELAY`.
- `*args (Any)`: Positional arguments for the function.
- `**kwargs (Any)`: Keyword arguments for the function.
**Returns:**
- `Any`: The result of the function execution.
**Raises:**
- `Exception`: After all retries fail.
**Example Usage:**
```python
from clusterops import retry_with_backoff  # sample_task as defined in the Quick Start

result = retry_with_backoff(sample_task, retries=5, delay=2, n=10)
print(f"Result after retries: {result}")
```
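
For intuition, exponential backoff doubles the wait after each failed attempt. The following is a standalone sketch of the pattern, not ClusterOps' internal implementation:

```python
import time
from typing import Any, Callable

def retry_sketch(func: Callable[..., Any], retries: int = 3,
                 delay: float = 1.0, *args: Any, **kwargs: Any) -> Any:
    # Illustrative only: waits delay, 2*delay, 4*delay, ... between attempts.
    for attempt in range(retries):
        try:
            return func(*args, **kwargs)
        except Exception:
            if attempt == retries - 1:
                raise  # all attempts exhausted
            time.sleep(delay * (2 ** attempt))
```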
---
### `execute_with_cpu_cores(core_count: int, func: Callable, *args: Any, **kwargs: Any) -> Any`
**Description:**
Executes a function using a specified number of CPU cores.
**Arguments:**
- `core_count (int)`: The number of CPU cores to run the function on.
- `func (Callable)`: The function to be executed.
- `*args (Any)`: Positional arguments for the function.
- `**kwargs (Any)`: Keyword arguments for the function.
**Returns:**
- `Any`: The result of the function execution.
**Raises:**
- `ValueError`: If the number of CPU cores specified is invalid or exceeds available cores.
- `RuntimeError`: If there is an error executing the function on the specified CPU cores.
**Example Usage:**
```python
from clusterops import execute_with_cpu_cores  # sample_task as defined in the Quick Start

result = execute_with_cpu_cores(4, sample_task, 10)
print(f"Result: {result}")
```
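
To use every core the system reports, the core count can be derived from `list_available_cpus()`; a small sketch:

```python
from clusterops import list_available_cpus, execute_with_cpu_cores

def sample_task(n: int) -> int:
    return n * n

# Size the worker pool from what the system actually reports.
core_count = len(list_available_cpus())
result = execute_with_cpu_cores(core_count, sample_task, 10)
print(f"Result on {core_count} cores: {result}")
```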
---
### `list_available_gpus() -> List[str]`
**Description:**
Lists all available GPUs on the system.
**Returns:**
- `List[str]`: A list of available GPU names.
**Raises:**
- `RuntimeError`: If no GPUs are found.
**Example Usage:**
```python
from clusterops import list_available_gpus

gpus = list_available_gpus()
print(f"Available GPUs: {gpus}")
```
---
### `execute_on_gpu(gpu_id: int, func: Callable, *args: Any, **kwargs: Any) -> Any`
**Description:**
Executes a function on a specific GPU using Ray.
**Arguments:**
- `gpu_id (int)`: The GPU to run the function on.
- `func (Callable)`: The function to be executed.
- `*args (Any)`: Positional arguments for the function.
- `**kwargs (Any)`: Keyword arguments for the function.
**Returns:**
- `Any`: The result of the function execution.
**Raises:**
- `ValueError`: If the GPU index is invalid.
- `RuntimeError`: If there is an error executing the function on the GPU.
**Example Usage:**
```python
from clusterops import execute_on_gpu  # sample_task as defined in the Quick Start

result = execute_on_gpu(0, sample_task, 10)
print(f"Result: {result}")
```
---
### `execute_on_multiple_gpus(gpu_ids: List[int], func: Callable, *args: Any, **kwargs: Any) -> List[Any]`
**Description:**
Executes a function across multiple GPUs using Ray.
**Arguments:**
- `gpu_ids (List[int])`: The list of GPU IDs to run the function on.
- `func (Callable)`: The function to be executed.
- `*args (Any)`: Positional arguments for the function.
- `**kwargs (Any)`: Keyword arguments for the function.
**Returns:**
- `List[Any]`: A list of results from the execution on each GPU.
**Raises:**
- `ValueError`: If any GPU index is invalid.
- `RuntimeError`: If there is an error executing the function on the GPUs.
**Example Usage:**
```python
from clusterops import execute_on_multiple_gpus  # sample_task as defined in the Quick Start

results = execute_on_multiple_gpus([0, 1], sample_task, 10)
print(f"Results: {results}")
```
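
To fan a task out to every detected GPU, the ID list can be derived from `list_available_gpus()`; a sketch that assumes the N detected GPUs map to IDs `0..N-1`:

```python
from clusterops import list_available_gpus, execute_on_multiple_gpus

def sample_task(n: int) -> int:
    return n * n

# Assumption: the detected GPUs are addressable as IDs 0..N-1.
gpu_ids = list(range(len(list_available_gpus())))
results = execute_on_multiple_gpus(gpu_ids, sample_task, 10)
print(f"Results: {results}")
```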
---
### `sample_task(n: int) -> int`
**Description:**
A sample task function that returns the square of a number.
**Arguments:**
- `n (int)`: Input number to be squared.
**Returns:**
- `int`: The square of the input number.
**Example Usage:**
```python
result = sample_task(10)
print(f"Square of 10: {result}")
```
---
Each entry above documents the function's purpose, arguments, return values, potential exceptions, and example usage.
---
## Contributing
We welcome contributions to ClusterOps! If you'd like to contribute, please follow these steps:
1. **Fork the repository** on GitHub.
2. **Clone your fork** locally:

   ```bash
   git clone https://github.com/<your-username>/ClusterOps.git
   cd ClusterOps
   ```

3. **Create a feature branch** for your changes:

   ```bash
   git checkout -b feature/new-feature
   ```

4. **Install the development dependencies**:

   ```bash
   pip install -r dev-requirements.txt
   ```

5. **Make your changes**, and be sure to include tests.
6. **Run tests** to ensure everything works:

   ```bash
   pytest
   ```

7. **Commit your changes** and push them to GitHub:

   ```bash
   git commit -m "Add new feature"
   git push origin feature/new-feature
   ```

8. **Submit a pull request** on GitHub, and we'll review it as soon as possible.
### Reporting Issues
If you encounter any issues, please create a [GitHub issue](https://github.com/the-swarm-corporation/clusterops/issues).
## Further Documentation
See [DOCS.md](/DOCS.md).
---
## License
ClusterOps is licensed under the MIT License. See the [LICENSE](LICENSE) file for more details.
---
## Contact
For any questions, feedback, or contributions, please contact the **Swarms Team** at [kye@swarms.world](mailto:kye@swarms.world).