deepsea-ai


Name: deepsea-ai
Version: 1.25.0
Home page: https://github.com/mbari-org/deepsea-ai
Summary: DeepSeaAI is a Python package to simplify processing deep sea video in AWS from a command line.
Upload time: 2024-04-25 01:20:11
Author: Danelle Cline
Requires Python: <3.12,>=3.10
License: Apache License 2.0

[![MBARI](https://www.mbari.org/wp-content/uploads/2014/11/logo-mbari-3b.png)](http://www.mbari.org)
[![semantic-release](https://img.shields.io/badge/%20%20%F0%9F%93%A6%F0%9F%9A%80-semantic--release-e10079.svg)](https://github.com/semantic-release/semantic-release)
[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![Python](https://img.shields.io/badge/language-Python-blue.svg)](https://www.python.org/downloads/)

**DeepSeaAI** is a Python package to simplify processing deep sea video in [AWS](https://aws.amazon.com) from a command line. 
 
It includes reasonable defaults that have been optimized for deep sea video. The goal is to simplify running these algorithms in AWS.

DeepSea-AI currently supports:

 - *Training* [YOLOv5](http://github.com/ultralytics/yolov5) object detection models with up to 8 GPUs, using the best available instances in AWS
 - *Processing* video with a [YOLOv5](http://github.com/ultralytics/yolov5) detection and tracking pipeline using
     * [StrongSORT](https://github.com/mikel-brostrom/Yolov5_StrongSORT_OSNet) tracking
 - *Scaling* processing with the [AWS Elastic Container Service](https://aws.amazon.com/ecs/)
(click the image below to see a larger example)
[![ Image link ](docs/imgs/ecs_arch_small.png)](docs/imgs/ecs_arch.png)

The cost to process a video is typically less than **$1.25** per 1-hour video using a model with a 640-pixel input size.

The cost to train a YOLOv5 model depends on your data size and the number of GPUs you use. A large collection with 30K images and
300K localizations may cost **$300-$600** to train, depending on the instance you choose. This is reasonably small for a
research project, and small in comparison to purchasing your own GPU hardware.
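As a rough worked example at the quoted rate, a batch of one-hour videos scales linearly: 100 videos at about $1.25 each is on the order of $125. This is a back-of-the-envelope estimate only; actual cost varies with instance choice and model size.

```shell
# Back-of-the-envelope processing cost: number of videos * dollars per 1-hour video.
# 100 one-hour videos at the quoted ~$1.25 rate:
awk 'BEGIN { printf "$%.2f\n", 100 * 1.25 }'
# prints $125.00
```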

See the full documentation at [MBARI deepsea-ai](http://docs.mbari.org/deepsea-ai).
 
## Processing
Processing runs on the [AWS Elastic Container Service](https://aws.amazon.com/ecs/) with an architecture
that includes an SQS message queue to start the processing. Simply upload a video
to an S3 bucket, then submit a job with the location of that video to the queue to
start processing. The result is returned to an S3 bucket, and the video is optionally
removed to reduce storage cost.
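A minimal sketch of that flow from the command line, assuming a hypothetical bucket name `my-video-bucket`. The upload uses the standard AWS CLI; the exact job-submission flags are not shown here, so consult `deepsea-ai ecsprocess --help` for them. The AWS commands are left commented because they require configured credentials:

```shell
# Hypothetical names - substitute your own bucket and video.
BUCKET=my-video-bucket
VIDEO=dive1.mp4

# 1. Upload the video to S3 (requires configured AWS credentials):
#    aws s3 cp "$VIDEO" "s3://$BUCKET/"
# 2. Submit a job pointing at that S3 location to the queue;
#    see `deepsea-ai ecsprocess --help` for the exact flags.
echo "Video location to submit: s3://$BUCKET/$VIDEO"
```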


## Getting Started
## Install

There are three main requirements to use this:

1.  [An account with Amazon Web Services (AWS)](https://aws.amazon.com).
2.  [An account with Docker](http://docker.com).
3.  A Python >=3.10, <3.12 environment in which to install and update the package using [pip](https://pip.pypa.io/en/stable/getting-started/).

After you have set up your AWS account, configure it using the AWS CLI tool:

```shell
pip install awscli
aws configure
aws --version
```

Then install the module:

```shell
pip install -U deepsea-ai
```

Setting up the AWS environment is done with the setup mirror command. This only needs to be done once, or when you upgrade
the module. This command will set up the appropriate AWS permissions and mirror the images used by the commands
from [Docker Hub](https://hub.docker.com) to your Elastic Container Registry (ECR).

Be patient - this takes a while, but it only needs to be run once.

```shell
deepsea-ai setup --mirror
```

## Tutorials

* [FathomNet](docs/notebooks/fathomnet_train.ipynb) ✨ Recommended first step to learn more about how to train a YOLOv5 object detection model using freely available FathomNet data

### Create the Anaconda environment

The fastest way to get started is to use the Anaconda environment. This will create a conda environment called *deepsea-ai* and make it available in your local Jupyter notebook as the kernel named *deepsea-ai*:

```shell
conda env create 
conda activate deepsea-ai
pip install ipykernel
python -m ipykernel install --user --name=deepsea-ai
```

### Launch Jupyter

```shell
cd docs/notebooks
jupyter notebook
```
---

## Commands

* `deepsea-ai setup --help` - Set up the AWS environment. This must be run once before any other commands.
* [`deepsea-ai train --help` - Train a YOLOv5 model and save the model to a bucket](docs/commands/train.md)
* [`deepsea-ai process --help` - Process one or more videos and save the results to a bucket](docs/commands/process.md)
* [`deepsea-ai ecsprocess --help` - Process one or more videos using the Elastic Container Service and save the results to a bucket](docs/commands/process.md)
* [`deepsea-ai split --help` - Split your training data. This is required before the train command.](docs/data.md)
* [`deepsea-ai monitor --help` - Monitor processing. Use this after the ecsprocess command.](docs/commands/monitor.md)
* `deepsea-ai -h` - Print help message and exit.

## Setting up an Elastic Container Service (ECS) cluster 

To process videos in bulk, you can set up an ECS cluster to process them in parallel.
See the [ECS setup documentation](docs/commands/ecsdeploy.md) for more details.

---
Source code is available at [github.com/mbari-org/deepsea-ai](https://github.com/mbari-org/deepsea-ai/).
  
For more details, see the [official documentation](http://docs.mbari.org/deepsea-ai/install).
            
