| Field | Value |
| --- | --- |
| Name | degirum-cli |
| Version | 0.2.0 |
| Summary | Degirum AI package with CLI for image and video prediction |
| Author | DeGirum |
| Requires Python | >=3.8 |
| Upload time | 2024-11-14 08:01:33 |
# Degirum CLI
**Degirum CLI** is a command-line tool for running AI inference on images and videos and for benchmarking AI models using
the Degirum PySDK. The CLI provides default configurations for quick use, while allowing you to customize each command
with your own arguments.
## Features
- **Run AI inference on images and videos**: Use pre-trained models from Degirum's model zoo for object detection, face recognition, and more.
- **Benchmark multiple models**: Evaluate the performance of AI models by measuring FPS and efficiency across different configurations.
- **Flexible configuration**: Run commands with sensible defaults, or override options via the command line or a configuration file.
- **Support for extra options**: Pass additional keyword arguments to the inference engine, such as `measure_time=True`, directly from the CLI.
## Installation
1. Clone the repository:
```bash
git clone https://github.com/DeGirum/degirum_cli.git
cd degirum_cli
```
1. Install the required dependencies:
```bash
pip install -r requirements.txt
```
1. Install the package locally:
```bash
pip install .
```
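
Alternatively, since the package is published on PyPI (as the metadata above shows), you can skip the source checkout and install the released wheel directly:

```shell
# Install the published package from PyPI (0.2.0 at the time of writing):
pip install degirum-cli
```

Either route puts the `degirum_cli` entry point on your `PATH`.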
## Usage
### Setting the `DEGIRUM_CLOUD_TOKEN` Environment Variable
To access hardware options and model zoos on the DeGirum Cloud Platform with `degirum_cli`, you must supply a `DEGIRUM_CLOUD_TOKEN`.
Rather than passing the token as an argument to every command, you can set it once as an environment variable.
For detailed instructions on setting this environment variable on various systems (including Linux, macOS, Windows, and
virtual environments), please refer to [this guide](https://gist.github.com/shashichilappagari/ab856f4ed85fbfb623bc949cf453925b).
The rest of this guide assumes that the token is set as an environment variable. If you prefer not to set it, remember to
pass it as an argument (`--token`) to each of the command-line utilities described below.
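
For example, on Linux or macOS you can set the variable for the current shell session and confirm that child processes will see it (the token value below is a placeholder):

```shell
# Set the token for the current shell session (replace with your real token):
export DEGIRUM_CLOUD_TOKEN="your_token_here"

# Verify that child processes (such as degirum_cli) will inherit it:
python3 -c "import os; print('token set' if os.environ.get('DEGIRUM_CLOUD_TOKEN') else 'token missing')"
```

To persist the variable across sessions, add the `export` line to your shell profile (e.g. `~/.bashrc`), as described in the guide linked above.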
### Running with Defaults
The **Degirum CLI** comes with default values for most options, allowing you to run commands immediately without specifying arguments.
1. **Image Inference (with defaults)**
- You can run AI inference on a default image with a pre-configured model:
```bash
degirum_cli predict-image
```
- This will use the following defaults:
- Inference Host: `@cloud`
- Model Zoo: `degirum/public`
- Model: `yolov8n_relu6_coco--640x640_quant_n2x_orca1_1`
- Image Source: A built-in example image.
1. **Video Inference (with defaults)**
- You can run AI inference on a default video with a pre-configured model:
```bash
degirum_cli predict-video
```
1. **Benchmarking (with defaults)**
- Run the benchmark command with default settings:
```bash
degirum_cli benchmark
```
- This will benchmark multiple default models and use the cloud for inference.
### Using the Help Command
The **Degirum CLI** provides built-in help for all commands, making it easy to see the available options, their descriptions,
and the default values. Use the `--help` flag to display the full details of any command.
For example:
1. **Help for `predict-image` Command**:
```bash
degirum_cli predict-image --help
```
This will show the following information:
```bash
Usage: degirum_cli predict-image [OPTIONS] [EXTRA_ARGS]...
Run AI inference on an image with extra options.
Options:
--inference-host-address TEXT Hardware location for inference (e.g.,
@cloud, @local, IP). [default: @cloud]
--model-zoo-url TEXT URL or path to the model zoo. [default:
degirum/public]
--model-name TEXT Name of the model to use for inference.
[default: yolov8n_relu6_coco--640x640_quant_n2x_orca1_1]
--image-source TEXT Path or URL to the image for inference.
[default: https://raw.githubusercontent.com/DeGirum/PySDKExamples/main/images/ThreePersons.jpg]
--token TEXT Cloud platform token to use for inference.
Attempts to load from environment if not provided.
--help Show this message and exit.
```
This output provides the default values for each argument and explains the usage of the command.
1. **Help for Other Commands**:
   - You can also use the `--help` flag with other commands, such as `predict-video`, `run-composition`, or `benchmark`, to see the
     specific options available for each.
```bash
degirum_cli predict-video --help
degirum_cli run-composition --help
degirum_cli benchmark --help
```
This feature makes it easy to explore the available options and use the CLI effectively.
### Customizing the Command
Once you're familiar with the defaults, you can override the parameters to fit your needs by passing arguments.
1. **Image Inference with Custom Arguments**
- Example of customizing image inference:
```bash
degirum_cli predict-image --inference-host-address @cloud --model-zoo-url degirum/public --model-name yolov8n_relu6_coco--640x640_quant_n2x_orca1_1 --image-source /path/to/image.jpg
```
- You can also pass extra arguments as key-value pairs:
```bash
degirum_cli predict-image --inference-host-address @cloud --model-zoo-url degirum/public --model-name yolov8n_relu6_coco--640x640_quant_n2x_orca1_1 --image-source /path/to/image.jpg measure_time=True
```
1. **Video Inference with Custom Arguments**
- Example of running inference on a video with custom arguments:
```bash
degirum_cli predict-video --inference-host-address @cloud --model-zoo-url degirum/public --model-name yolov8n_relu6_coco--640x640_quant_n2x_orca1_1 --video-source /path/to/video.mp4
```
1. **Running Gizmo Compositions**
   To run a gizmo composition, first define it in a YAML configuration file, then pass the file name as the `--config-file`
   parameter of the `run-composition` command:
```bash
degirum_cli run-composition --config-file /path/to/config.yaml
```
   Additionally, you may pass the `--allow-stop` flag so that a running composition can be stopped from the terminal by pressing the *Enter* key.
1. **Benchmarking with Custom Configurations**
- Example of customizing the benchmark command:
```bash
degirum_cli benchmark --config-file /path/to/config.yaml --iterations 200 --token your_token measure_time=True
```
- If no configuration file is provided, default model zoo and models are used:
```bash
degirum_cli benchmark --iterations 100 --token your_token
```
- Example configuration file (`config.yaml`):
```yaml
model_zoo_url: degirum/public
model_names:
- mobilenet_v1_imagenet--224x224_quant_n2x_orca1_1
- yolov8n_relu6_coco--640x640_quant_n2x_orca1_1
```
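
If you want to reproduce this setup from the shell, you can write the example configuration above to disk with a here-document and then point the benchmark at it:

```shell
# Create the example benchmark configuration shown above:
cat > config.yaml <<'EOF'
model_zoo_url: degirum/public
model_names:
  - mobilenet_v1_imagenet--224x224_quant_n2x_orca1_1
  - yolov8n_relu6_coco--640x640_quant_n2x_orca1_1
EOF

# Then run the benchmark against it (requires a valid cloud token):
# degirum_cli benchmark --config-file config.yaml --iterations 100
```

The final command is left commented out here because it performs real cloud inference.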
### Command-Line Options
- **`--inference-host-address`**: Specify where to run inference, such as `@cloud` for cloud servers or IP addresses for local servers.
- **`--model-zoo-url`**: URL or path to the model zoo for loading pre-trained models.
- **`--model-name`**: Specify the name of the model to use for inference.
- **`--image-source` / `--video-source`**: Path to the image or video file to be used for inference.
- **`--iterations`**: Number of iterations for benchmarking.
- **`--token`**: Provide your Degirum cloud platform token.
- **Additional arguments**: Pass additional options (e.g., `measure_time=True`) for fine-tuning the inference process.
## Getting Started
1. **Run Image Inference (Default Command)**:
```bash
degirum_cli predict-image
```
1. **Run Video Inference (Default Command)**:
```bash
degirum_cli predict-video
```
1. **Run Benchmarking (Default Command)**:
```bash
degirum_cli benchmark
```
1. **Run Image Inference with Custom Arguments**:
```bash
degirum_cli predict-image --inference-host-address @cloud --model-zoo-url degirum/public --model-name yolov8n_relu6_coco--640x640_quant_n2x_orca1_1 --image-source /path/to/image.jpg
```
### Notes
- If no token is provided via the `--token` option or the `DEGIRUM_CLOUD_TOKEN` environment variable, an error will be raised, so make sure your DeGirum cloud token is set properly.
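
In your own wrapper scripts, a small pre-flight check can make this failure mode more obvious. The `check_token` helper below is purely illustrative and not part of degirum_cli:

```shell
# Illustrative guard: warn early instead of letting the CLI fail on a missing token.
check_token() {
    if [ -z "${DEGIRUM_CLOUD_TOKEN:-}" ]; then
        echo "DEGIRUM_CLOUD_TOKEN is not set; export it or pass --token explicitly." >&2
        return 1
    fi
}

# Only run inference when a token is available:
if check_token; then
    degirum_cli predict-image
else
    echo "Skipping inference until a token is provided."
fi
```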
## Raw data

```json
{
    "_id": null,
    "home_page": null,
    "name": "degirum-cli",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.8",
    "maintainer_email": null,
    "keywords": null,
    "author": "DeGirum",
    "author_email": null,
    "download_url": null,
    "platform": null,
    "description": "(full README text, reproduced above)",
    "bugtrack_url": null,
    "license": null,
    "summary": "Degirum AI package with CLI for image and video prediction",
    "version": "0.2.0",
    "project_urls": null,
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "dd80454c189afc847c93a59e815f41c7764c8c2687b7663c052eaa5e7912b0b9",
                "md5": "0a3014b85f1a342c97c5bbb21a16ef2f",
                "sha256": "5121f69081db3ed2577916ca4899ffeceea793b4dd7c0c5ca41cc18d72a3bca3"
            },
            "downloads": -1,
            "filename": "degirum_cli-0.2.0-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "0a3014b85f1a342c97c5bbb21a16ef2f",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.8",
            "size": 10468,
            "upload_time": "2024-11-14T08:01:33",
            "upload_time_iso_8601": "2024-11-14T08:01:33.252618Z",
            "url": "https://files.pythonhosted.org/packages/dd/80/454c189afc847c93a59e815f41c7764c8c2687b7663c052eaa5e7912b0b9/degirum_cli-0.2.0-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-11-14 08:01:33",
    "github": false,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "lcname": "degirum-cli"
}
```