# RoboML 🤖
RoboML is an aggregator package written for quickly deploying open source ML models for robots. It is designed to cover the following use cases:
- **Readily deploy various useful models:** The package provides a wrapper around the 🤗 [**Transformers**](https://github.com/huggingface/transformers) and [**SentenceTransformers**](https://www.sbert.net/) libraries, so most relevant open source models from these libraries can be quickly deployed behind a highly scalable server endpoint.
- **Deploy Detection Models with Tracking**: With RoboML one can deploy all detection models available in [**MMDetection**](https://github.com/open-mmlab/mmdetection), an open source vision model aggregation library. These detection models can also be seamlessly used for tracking.
- **Use Open Source Vector DBs**: RoboML provides a unified interface for deploying Vector DBs along with ML models. Currently it is packaged with [**ChromaDB**](https://www.trychroma.com/), an open source multimodal vector database.
- **Aggregate robot-specific ML models from the robotics community**: RoboML aims to be an aggregator package for models trained by the robotics community. These models can range from multimodal LLMs and vision models to robot action models, and can be used with ROS-based functional components. See the usage in [ROS Agents](https://automatika-robotics.github.io/ros-agents).
## Installation
RoboML has been tested on Ubuntu 20.04 and later. It should ideally be installed on a system with a GPU and CUDA 12.1, although it should also work without a GPU. If you encounter any installation problems, please open an issue.
`pip install roboml`
### From Source
```shell
git clone https://github.com/automatika-robotics/roboml.git && cd roboml
virtualenv venv && source venv/bin/activate
pip install pip-tools
pip install .
```
### For vision models support
If you want to use detection and tracking with vision models from the MMDetection library, you will need to install a couple of additional dependencies as follows:
- Install roboml using the vision flag:
`pip install roboml[vision]`
- Install mmcv using the installation instructions provided [here](https://mmcv.readthedocs.io/en/latest/get_started/installation.html). For installation with pip, simply pick the PyTorch and CUDA versions that you have installed and copy the generated pip installation command (a quick way to check both versions is sketched after this list). For example, for PyTorch 2.1:
`pip install mmcv==2.1.0 -f https://download.openmmlab.com/mmcv/dist/cu121/torch2.1/index.html`
- Install mmdetection as follows:
```shell
git clone https://github.com/open-mmlab/mmdetection.git
cd mmdetection
pip install -v -e .
```
- If ffmpeg and libGL are missing, run the following:
`sudo apt-get update && sudo apt-get install -y ffmpeg libsm6 libxext6`
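Since the correct mmcv wheel depends on your local PyTorch and CUDA builds, the following minimal snippet reads both versions so you can pick the matching command from the generator linked above:

```python
# Print the locally installed PyTorch and CUDA versions, to pick the
# matching mmcv wheel from the mmcv installation page.
import torch

print("PyTorch:", torch.__version__)  # e.g. 2.1.0
print("CUDA:", torch.version.cuda)    # e.g. 12.1 (None for CPU-only builds)
```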
### Model quantization support
RoboML uses [bitsandbytes](https://huggingface.co/docs/bitsandbytes/main/en/index) for model quantization. However, it is installed automatically as a dependency only on **x86_64** architectures, since bitsandbytes pre-built wheels are not available for other architectures. For other architectures, such as _aarch64_ on NVIDIA Jetson boards, it is recommended to build bitsandbytes from source using the following instructions:
```shell
git clone https://github.com/bitsandbytes-foundation/bitsandbytes.git && cd bitsandbytes/
pip install -r requirements-dev.txt
cmake -DCOMPUTE_BACKEND=cuda -S .
make
pip install .
```
More details are available on the bitsandbytes [installation page](https://huggingface.co/docs/bitsandbytes/main/en/installation).
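As a quick sanity check after building from source, the following minimal sketch simply imports the library and prints its version; the import should succeed without warnings if the compiled CUDA backend loads correctly:

```python
# Minimal smoke test for a source build of bitsandbytes: the import will
# fail or emit warnings if the compiled CUDA backend cannot be loaded.
import bitsandbytes as bnb

print("bitsandbytes version:", bnb.__version__)
```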
## Build in a Docker container (Recommended)
- Install Docker Desktop.
- Install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).
```shell
git clone https://github.com/automatika-robotics/roboml.git && cd roboml
# build the container image
docker build --tag=automatika:roboml .
# for NVIDIA Jetson boards replace the above command with
docker build --tag=automatika:roboml -f Dockerfile.Jetson .
# run the container with gpu support
docker run --runtime=nvidia --gpus all --rm -p 8000:8000 automatika:roboml
```
## Servers
By default, roboml starts models as [Ray Serve](https://docs.ray.io/en/latest/serve/index.html) apps, making the models scalable across multiple infrastructure configurations. See the [Ray Serve](https://docs.ray.io/en/latest/serve/index.html) documentation for details.
### An Experimental Server based on RESP
When using ML models on robots, latency is a major consideration. When models are deployed on distributed infrastructure (and not on the edge, due to compute limitations), latency depends on both the model inference time and the server communication time. Therefore, RoboML also implements an experimental server built on [RESP](https://github.com/antirez/RESP3), which can be accessed using any Redis client. RESP is a human-readable, binary-safe protocol that is very simple to parse, and thus can be used to implement servers significantly faster than HTTP, especially when the payloads are also packaged binary data (for example images, audio, or video). Instead of JSON, the RESP server uses msgpack, a cross-platform serialization library available in over 50 languages, to package data. Work on the server was inspired by earlier work of [@hansonkd](https://github.com/hansonkd) and his [Tino](https://github.com/hansonkd/Tino) project.
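As an illustration, such an endpoint could be called from Python roughly as follows. This is a hedged sketch: the host, port, command name, and payload schema are assumptions for illustration, not the documented RoboML API; see the clients in ROS Agents for the actual interface.

```python
# Illustrative only: "model.inference" and the payload fields are
# hypothetical names, not RoboML's documented command set.
import msgpack
import redis

# Any Redis client can speak RESP; host and port here are assumptions.
client = redis.Redis(host="localhost", port=6379)

# Payloads are packed with msgpack instead of JSON.
request = msgpack.packb({"query": "Describe what the robot sees."})
raw_reply = client.execute_command("model.inference", request)
print(msgpack.unpackb(raw_reply))
```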
## Usage
To run an HTTP server, simply run the following in the terminal:
`roboml`
To run a RESP-based server, run:
`roboml-resp`
To see how these servers are called from a ROS package that implements their clients, please refer to the [ROS Agents](https://automatika-robotics.github.io/ros-agents) documentation. A minimal client sketch is also shown below.
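The sketch below shows what an HTTP call to the server might look like. The route and payload fields are hypothetical placeholders, since the actual request schema is defined by the clients in ROS Agents; the port matches the one exposed in the Docker example above.

```python
# Hypothetical route and payload: placeholders for illustration, not the
# documented RoboML HTTP API.
import requests

response = requests.post(
    "http://localhost:8000/some_model/inference",  # hypothetical endpoint
    json={"query": "Describe what the robot sees."},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```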
## Running Tests
To run tests, install with:
`pip install ".[dev]"`
Then run the following in the root directory:
`python -m pytest`
## Copyright
The code in this distribution is Copyright (c) 2024 Automatika Robotics unless explicitly indicated otherwise.
ROS Agents is made available under the MIT license. Details can be found in the [LICENSE](LICENSE) file.
## Contributions
ROS Agents has been developed in collaboration between [Automatika Robotics](https://automatikarobotics.com/) and [Inria](https://inria.fr/). Contributions from the community are most welcome.