| Field | Value |
| --- | --- |
| Name | gpu-sentinel |
| Version | 0.1.4 |
| Summary | Monitor idle GPU usage. |
| upload_time | 2023-01-16 23:22:11 |
| home_page | |
| maintainer | |
| docs_url | None |
| author | |
| requires_python | >=3.7 |
| license | MIT |
| keywords | gpu, monitor, utilization |
| VCS | |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
# GPU Sentinel
*A Moonshine Labs tool*
## Overview
If you're automating training of large models in the cloud, cost control is critical. How many times have you accidentally left an expensive GPU instance running after the underlying job crashed, costing you money or capacity with no benefit?
*GPU Sentinel* is a simple tool that watches your instance and triggers when GPU utilization stays below a certain level for a period of time. GPU Sentinel can automatically shut down or reboot the instance, or simply end its own process so you can take an action yourself.
## Installation
```
$ pip install gpu_sentinel
$ gpu_sentinel --help
```
## Usage
GPU Sentinel has two states: IDLE and ARMED.
When you start the program, it waits for GPU utilization to stay above a certain threshold for a set amount of time. Once this condition is met, the sentinel becomes ARMED. This lets you start the sentinel at any point; it will only trigger once the GPU has been busy for a while.
Once ARMED, the sentinel waits for GPU utilization to drop below a certain threshold for a set amount of time. Once this condition is met, the `kill_action` runs immediately.
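Conceptually, the arming logic behaves like the sketch below. This is only an illustration of the state machine described above, not the package's actual implementation, and the variable names are made up:
```python
# Illustrative sketch of the IDLE/ARMED state machine (not gpu_sentinel's real code).
state = "IDLE"
active_seconds = 0  # consecutive seconds at or above arm_threshold
idle_seconds = 0    # consecutive seconds at or below kill_threshold

def step(utilization, arm_threshold, arm_duration,
         kill_threshold, kill_duration, kill_action):
    """Advance the state machine by one second of observed utilization."""
    global state, active_seconds, idle_seconds
    if state == "IDLE":
        active_seconds = active_seconds + 1 if utilization >= arm_threshold else 0
        if active_seconds >= arm_duration:
            state = "ARMED"
    else:  # ARMED
        idle_seconds = idle_seconds + 1 if utilization <= kill_threshold else 0
        if idle_seconds >= kill_duration:
            kill_action()  # end_process, shutdown, or reboot
```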
Options:
```
arm_duration:   How many seconds of activity to wait before arming the sentinel.
arm_threshold:  What level of utilization is considered activity.
kill_duration:  How many seconds of inactivity to wait before running the kill action.
kill_threshold: What level of utilization is considered inactivity.
kill_action:    What to do when the kill trigger is hit {end_process,shutdown,reboot}.
gpu_devices:    Which GPU devices to average (empty for all).
```
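As an example, assuming these options are exposed as `--`-prefixed command-line flags (check `gpu_sentinel --help` for the exact spelling), arming after a minute of activity and shutting down after five idle minutes might look like:
```
$ gpu_sentinel --arm_duration 60 --arm_threshold 0.7 \
    --kill_duration 300 --kill_threshold 0.3 \
    --kill_action shutdown
```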
## API
If you would prefer to integrate this package into your own code, we provide a straightforward API to do so.
```python
import time

from gpu_sentinel import Sentinel, get_gpu_usage

def my_callback_fn():
    print("Triggered!")
    exit()

# Create the sentinel that watches the values.
sentinel = Sentinel(
    arm_duration=10,
    arm_threshold=0.7,
    kill_duration=60,
    kill_threshold=0.7,
    kill_fn=my_callback_fn,
)

while True:
    # This is the averaged GPU usage of the devices.
    gpu_usage = get_gpu_usage(device_ids=[0, 1, 2, 3])
    # Add the GPU usage to the sentinel's next state.
    sentinel.tick(gpu_usage)
    # The sentinel operates on ticks, not seconds, so if we want to check every second
    # we must do the timer ourselves.
    time.sleep(1)
```
## Current Limitations
* To shut down or reboot the machine, GPU Sentinel requires sudo permissions or a sudo-less shutdown (see the sketch below).
* Currently only works on Linux; Windows support can be added if there's interest.
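If you'd rather wire the shutdown yourself through the API, a sketch might look like the following. `shutdown_machine` is a hypothetical callback, and the sudoers rule in the comment is one common way to allow password-less shutdown; adapt both to your environment:
```python
import os

from gpu_sentinel import Sentinel

# Hypothetical callback: shuts the machine down via sudo. For password-less
# shutdown, a sudoers rule like the following is one common approach:
#   youruser ALL=(ALL) NOPASSWD: /sbin/shutdown
def shutdown_machine():
    os.system("sudo shutdown -h now")

sentinel = Sentinel(
    arm_duration=10,
    arm_threshold=0.7,
    kill_duration=60,
    kill_threshold=0.7,
    kill_fn=shutdown_machine,
)
```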
## Raw data
{
"_id": null,
"home_page": "",
"name": "gpu-sentinel",
"maintainer": "",
"docs_url": null,
"requires_python": ">=3.7",
"maintainer_email": "",
"keywords": "gpu,monitor,utilization",
"author": "",
"author_email": "Nate Harada <gpu_sentinel@moonshinelabs.ai>",
"download_url": "https://files.pythonhosted.org/packages/ee/c4/c6bf93bd41397ce7a10ca9e9b18c9f69f355ad1a4b666c1eba09b8ea95b7/gpu_sentinel-0.1.4.tar.gz",
"platform": null,
"description": "# GPU Sentinel\n\n*A Moonshine Labs tool*\n\n## Overview\nIf you're automating training your large models in the cloud, cost control is critial. How many times have you accidentally left an expensive GPU instance running when the underlying job had crashed, costing you money or capacity with no benefit?\n\n*GPU Sentinel* is a simple tool that will watch your instance and automatically trigger when GPU utilization drops below a certain amount for a period of time. GPU Sentinel can automatically shutdown or reboot the instance, or simply end its own process so you can do an action yourself.\n\n## Installation\n```\n$ pip install gpu_sentinel\n$ gpu_sentinel --help\n```\n\n## Usage\nThe GPU sentinel has two states, IDLE and ARMED.\n\nWhen you start the program, it will wait for the GPU to be above a certain utilization for a set amount of time. Once this condition is met, the sentinel will be ARMED. This will let you set the sentinel at any point, and it will only trigger once the GPU has been running for a while.\n\nOnce ARMED, the sentinel will wait for the GPU utilization to drop below a certain threshold for a set amount of time. Once this condition is met, the `kill_action` will occur immediately.\n\nOptions:\n\n```\narm_duration: How many seconds of activity to wait before arming the sentinel.\narm_threshold: What level of utilization is considered activity\nkill_duration: How many seconds of inactivity to wait before running the kill function.\nkill_threshold: What level of utilization is considered inactivity\nkill_action: What to do when the kill trigger is hit {end_process,shutdown,reboot}\ngpu_devices: Which GPU devices to average (empty for all)\n```\n\n## API\nIf you would prefer to use integrate this package into your own code, we provide a straightforward API to do so.\n\n```python\nfrom gpu_sentinel import Sentinel, get_gpu_usage\n\ndef my_callback_fn():\n print(\"Triggered!\")\n exit()\n\n# Create the sentinel that watches the values.\nsentinel = Sentinel(\n arm_duration=10,\n arm_threshold=0.7,\n kill_duration=60,\n kill_threshold=0.7,\n kill_fn=my_callback_fn,\n)\n\nwhile True:\n # This is the averaged GPU usage of the devices.\n gpu_usage = get_gpu_usage(device_ids=[0, 1, 2, 3])\n # Add the GPU usage to the sentinel's next state.\n sentinel.tick(gpu_usage)\n # The sentinel operates on ticks, not seconds, so if we want to check every second\n # we must do the timer ourselves.\n time.sleep(1)\n```\n\n## Current Limitations\n\n* To shutdown/reboot the machine, GPU Sentinel requires sudo permissions or sudo-less shutdown.\n* Currently only working on Linux, can add Windows support if there's interest.",
"bugtrack_url": null,
"license": "MIT",
"summary": "Monitor idle GPU usage.",
"version": "0.1.4",
"split_keywords": [
"gpu",
"monitor",
"utilization"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "a810f1bf2b11227ed2a5632ecda66df2495dfbb7c546d73113c87bfff548be08",
"md5": "5da8fb671bf77f90d851f34484804c63",
"sha256": "615abff10bc5769e506f72d1653753c910b3251001293d9892de84772f6ac5a5"
},
"downloads": -1,
"filename": "gpu_sentinel-0.1.4-py3-none-any.whl",
"has_sig": false,
"md5_digest": "5da8fb671bf77f90d851f34484804c63",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.7",
"size": 7124,
"upload_time": "2023-01-16T23:22:09",
"upload_time_iso_8601": "2023-01-16T23:22:09.954180Z",
"url": "https://files.pythonhosted.org/packages/a8/10/f1bf2b11227ed2a5632ecda66df2495dfbb7c546d73113c87bfff548be08/gpu_sentinel-0.1.4-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "eec4c6bf93bd41397ce7a10ca9e9b18c9f69f355ad1a4b666c1eba09b8ea95b7",
"md5": "b99577ac5e86e250cfc2201c0fc39bfb",
"sha256": "c25eb6c75fd538f2b60fa9b95ad77abfa1227da452b51e91b943ea04a7ecc946"
},
"downloads": -1,
"filename": "gpu_sentinel-0.1.4.tar.gz",
"has_sig": false,
"md5_digest": "b99577ac5e86e250cfc2201c0fc39bfb",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.7",
"size": 5712,
"upload_time": "2023-01-16T23:22:11",
"upload_time_iso_8601": "2023-01-16T23:22:11.931518Z",
"url": "https://files.pythonhosted.org/packages/ee/c4/c6bf93bd41397ce7a10ca9e9b18c9f69f355ad1a4b666c1eba09b8ea95b7/gpu_sentinel-0.1.4.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2023-01-16 23:22:11",
"github": false,
"gitlab": false,
"bitbucket": false,
"lcname": "gpu-sentinel"
}