roboml 0.2.3

Summary: Machine learning models optimized for robotics experimentation and deployment
Author: Automatika Robotics <contact@automatikarobotics.com>
Homepage: https://github.com/automatika-robotics/roboml
License: MIT
Requires-Python: >=3.10
Keywords: robots, robotics, machine learning, multimodal, deep learning
Upload time: 2024-12-06 12:10:00

# RoboML 🤖
[![PyPI][pypi-badge]][pypi-url]
[![MIT licensed][mit-badge]][mit-url]
[![Python Version][python-badge]][python-url]

[pypi-badge]: https://img.shields.io/pypi/v/roboml.svg
[pypi-url]: https://pypi.org/project/roboml/
[mit-badge]: https://img.shields.io/pypi/l/roboml.svg
[mit-url]: https://github.com/automatika-robotics/roboml/LICENSE
[python-badge]: https://img.shields.io/pypi/pyversions/roboml.svg
[python-url]: https://www.python.org/downloads/

RoboML is an aggregator package for quickly deploying open source ML models for robots. It is designed to cover the following use cases.

- **Readily deploy various useful models**: The package provides a wrapper around the 🤗 [**Transformers**](https://github.com/huggingface/transformers) and [**SentenceTransformers**](https://www.sbert.net/) libraries. Pretty much all relevant open source models from these libraries can be quickly deployed behind a highly scalable server endpoint.
- **Deploy detection models with tracking**: With RoboML one can deploy all detection models available in [**MMDetection**](https://github.com/open-mmlab/mmdetection), an open source vision model aggregation library. These detection models can also be seamlessly used for tracking.
- **Use open source vector DBs**: RoboML provides a unified interface for deploying vector DBs along with ML models. Currently it is packaged with [**ChromaDB**](https://www.trychroma.com/), an open source multimodal vector database.
- **Aggregate robot-specific ML models from the robotics community**: RoboML aims to be an aggregator package of models trained by the robotics community. These models can range from multimodal LLMs and vision models to robot action models, and can be used with ROS-based functional components. See the usage in [ROS Agents](https://automatika-robotics.github.io/ros-agents).

## Installation

RoboML has been tested on Ubuntu 20.04 and later. Ideally it should be installed on a system with a GPU and CUDA 12.1, but it should also work without a GPU. If you encounter any installation problems, please open an issue.

`pip install roboml`
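
As a quick smoke test that the installation succeeded (this only checks that the package imports):

`python -c "import roboml"`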

### From Source

```shell
git clone https://github.com/automatika-robotics/roboml.git && cd roboml
virtualenv venv && source venv/bin/activate
pip install pip-tools
pip install .
```

## Vision model support

If you want to utilize detection and tracking using vision models from the MMDetection library, you will need to install a couple of dependencies as follows:

- Install RoboML using the vision flag:

  `pip install "roboml[vision]"`

- Install mmcv using the installation instructions provided [here](https://mmcv.readthedocs.io/en/latest/get_started/installation.html). For installation with pip, simply pick the PyTorch and CUDA versions you have installed and copy the generated pip install command (if you are unsure of your versions, see the check after this list). For example, for PyTorch 2.1:

   `pip install mmcv==2.1.0 -f https://download.openmmlab.com/mmcv/dist/cu121/torch2.1/index.html`

- Install mmdetection as follows:

```shell
git clone https://github.com/open-mmlab/mmdetection.git
cd mmdetection
pip install -v -e .
```

- If ffmpeg and libGL are missing, install them with:

`sudo apt-get update && sudo apt-get install ffmpeg libsm6 libxext6`
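
As a sanity check (a minimal sketch, assuming PyTorch, mmcv, and mmdetection are installed in the active environment), you can print the installed versions; the PyTorch and CUDA versions are also what you need for the mmcv command above:

```shell
# PyTorch and CUDA versions (used to pick the matching mmcv wheel)
python -c "import torch; print(torch.__version__, torch.version.cuda)"
# confirm that mmcv and mmdetection import cleanly
python -c "import mmcv, mmdet; print(mmcv.__version__, mmdet.__version__)"
```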

### TensorRT-Based Model Deployment

Vision models in RoboML can be deployed for faster inference with NVIDIA TensorRT whenever NVIDIA GPU support is available. This deployment is currently supported on Linux-based x86_64 systems. TensorRT needs to be installed for this feature to work; please check the detailed installation instructions [here](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html).
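
If the TensorRT Python bindings are installed (they ship with some installation methods, e.g. `pip install tensorrt`), you can verify them with:

`python -c "import tensorrt; print(tensorrt.__version__)"`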

## Model quantization support

RoboML uses [bitsandbytes](https://huggingface.co/docs/bitsandbytes/main/en/index) for model quantization. However, it is only installed automatically as a dependency on **x86_64** architectures, as bitsandbytes pre-built wheels are not available for other architectures. For other architectures, such as _aarch64_ on NVIDIA Jetson boards, it is recommended to build bitsandbytes from source using the following instructions:

```shell
git clone https://github.com/bitsandbytes-foundation/bitsandbytes.git && cd bitsandbytes/
pip install -r requirements-dev.txt
cmake -DCOMPUTE_BACKEND=cuda -S .
make
pip install .
```
More details are available on the bitsandbytes [installation page](https://huggingface.co/docs/bitsandbytes/main/en/installation).
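
For reference, here is a minimal sketch of how bitsandbytes quantization is typically requested through the 🤗 Transformers API; the checkpoint name below is illustrative only, and RoboML's own internal wiring may differ:

```python
# 4-bit quantization via bitsandbytes, requested through transformers;
# the model checkpoint is an illustrative placeholder
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                    # store weights in 4-bit
    bnb_4bit_quant_type="nf4",            # NF4 quantization
    bnb_4bit_compute_dtype=torch.float16, # compute in fp16
)

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",   # illustrative checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)
```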

## Build in a Docker container (Recommended)

- Install Docker Desktop.
- Install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).

```shell
git clone https://github.com/automatika-robotics/roboml.git && cd roboml
# build the container image
docker build --tag=automatika:roboml .
# for NVIDIA Jetson boards replace the above command with
docker build --tag=automatika:roboml -f Dockerfile.Jetson .
# run the container with gpu support
docker run --runtime=nvidia --gpus all --rm -p 8000:8000 automatika:roboml
```

## Servers

By default, RoboML starts models as [Ray Serve](https://docs.ray.io/en/latest/serve/index.html) apps, making the models scalable across multiple infrastructure configurations. See the [Ray Serve](https://docs.ray.io/en/latest/serve/index.html) documentation for details.
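
Once a server is running (see Usage below), a quick liveness check against the Ray Serve HTTP proxy looks like the sketch below; it assumes the default proxy port 8000, which is also the port mapped in the Docker command above:

```python
# liveness check against the Ray Serve HTTP proxy (default port 8000 assumed);
# /-/healthz and /-/routes are standard Ray Serve proxy endpoints
import requests

print(requests.get("http://localhost:8000/-/healthz").text)   # "success" when healthy
print(requests.get("http://localhost:8000/-/routes").json())  # route -> application map
```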

### An Experimental Server based on RESP

When using ML models on robots, latency is a major consideration. When models are deployed on distributed infrastructure (and not on the edge, due to compute limitations), latency depends on both the model inference time and the server communication time. Therefore, RoboML also implements an experimental server built on [RESP](https://github.com/antirez/RESP3), which can be accessed using any Redis client. RESP is a human-readable, binary-safe protocol that is very simple to parse, and can thus be used to implement servers significantly faster than HTTP, especially when the payloads are binary data (for example images, audio, or video). Instead of JSON, the RESP server packages data with [msgpack](https://msgpack.org/), a cross-platform serialization library available in over 50 languages. Work on the server was inspired by earlier work of [@hansonkd](https://github.com/hansonkd) and his [Tino](https://github.com/hansonkd/Tino) project.
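
To illustrate the idea, here is a hypothetical client sketch using redis-py and msgpack. The command name and payload fields below are invented for illustration and are not RoboML's actual protocol; see the [ROS Agents](https://automatika-robotics.github.io/ros-agents) clients for the real interface:

```python
# hypothetical RESP client sketch; "inference" and the payload schema are
# illustrative placeholders, not RoboML's actual command protocol
import msgpack
import redis

client = redis.Redis(host="localhost", port=6379)  # address of roboml-resp (assumed)
request = msgpack.packb({"query": "Describe the scene."}, use_bin_type=True)
reply = client.execute_command("inference", request)
print(msgpack.unpackb(reply, raw=False))
```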

## Usage

To run an HTTP server, simply run the following in the terminal:

`roboml`

To run a RESP-based server, run:

`roboml-resp`

To see how these servers are called from a ROS package that implements their clients, please refer to the [ROS Agents](https://automatika-robotics.github.io/ros-agents) documentation.

## Running Tests

To run tests, install with:

`pip install ".[dev]"`

Then run the following in the root directory:

`python -m pytest`

## Copyright

The code in this distribution is Copyright (c) 2024 Automatika Robotics unless explicitly indicated otherwise.

RoboML is made available under the MIT license. Details can be found in the [LICENSE](LICENSE) file.

## Contributions

ROS Agents has been developed in collaboration between [Automatika Robotics](https://automatikarobotics.com/) and [Inria](https://inria.fr/). Contributions from the community are most welcome.

            
