roboml 0.2.0

Summary: Machine learning models optimized for robotics experimentation and deployment
Homepage: https://github.com/automatika-robotics/roboml
Requires Python: >=3.10
Keywords: robots, robotics, machine learning, multimodal, deep learning
Uploaded: 2024-10-01 04:43:38
# RoboML 🤖

RoboML is an aggregator package written for quickly deploying open source ML models for robots. It is designed to cover the following use cases:

- __Readily deploy various useful models:__ The package provides a wrapper around the 🤗 [**Transformers**](https://github.com/huggingface/transformers) and [**SentenceTransformers**](https://www.sbert.net/) libraries, so that virtually all relevant open source models from these libraries can be quickly deployed behind a highly scalable server endpoint.
- __Deploy Detection Models with Tracking__: With RoboML one can deploy all detection models available in [**MMDetection**](https://github.com/open-mmlab/mmdetection), an open source vision model aggregation library. These detection models can also be seamlessly used for tracking.
- __Use Open Source Vector DBs__: RoboML provides a unified interface for deploying vector DBs alongside ML models. It currently ships with [**ChromaDB**](https://www.trychroma.com/), an open source multimodal vector database.
- __Aggregate robot-specific ML models from the robotics community__: RoboML aims to be an aggregator package for models trained by the robotics community, ranging from multimodal LLMs to vision and robot action models, all of which can be used with ROS based functional components. See the usage in [ROS Agents](https://automatika-robotics.github.io/ros-agents).

## Installation

RoboML has been tested on Ubuntu 20.04 and later. Ideally, it should be installed on a system with a GPU and CUDA 12.1; however, it should also work without a GPU. If you encounter any installation problems, please open an issue.

### From Source

```shell
git clone https://github.com/automatika-robotics/roboml.git && cd roboml
virtualenv venv && source venv/bin/activate
pip install pip-tools
pip install .
```
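As noted above, a GPU with CUDA 12.1 is recommended but not required. Assuming PyTorch is present in the same environment (it is typically pulled in by the model libraries), a quick sanity check of the GPU setup looks like this:

```python
# Sanity check: report the PyTorch build and whether a CUDA GPU is visible.
# RoboML should still work on CPU if CUDA is not available.
import torch

print(f"PyTorch {torch.__version__}, CUDA available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"CUDA runtime {torch.version.cuda}, device: {torch.cuda.get_device_name(0)}")
```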

### For vision model support

If you want to use detection and tracking with vision models from the MMDetection library, you will need to install a few additional dependencies as follows:

- Install roboml using the vision flag:
`pip install roboml[vision]`
- Install mmcv using the installation instructions provided [here](https://mmcv.readthedocs.io/en/latest/get_started/installation.html). For installation with pip, simply pick the PyTorch and CUDA versions you have installed and copy the generated pip installation command.
- Install mmdetection as follows:
```shell
git clone https://github.com/open-mmlab/mmdetection.git
cd mmdetection
pip install -v -e .
```
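To confirm the vision stack installed cleanly, a minimal import check (assuming the steps above completed; the printed versions will vary by system) is:

```python
# Verify that the MMDetection stack can be imported and print the resolved versions.
import torch
import mmcv
import mmdet

print(f"torch {torch.__version__} (CUDA {torch.version.cuda})")
print(f"mmcv {mmcv.__version__}")
print(f"mmdet {mmdet.__version__}")
```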

## Build in a Docker container (Recommended)

- Install Docker Desktop.
- Install the [NVIDIA toolkit for Docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).
```shell
git clone https://github.com/automatika-robotics/roboml.git && cd roboml
docker build --tag=automatika:roboml .
docker run --runtime=nvidia --gpus all --rm -p 8000:8000 automatika:roboml
```

## Servers

By default, RoboML starts models as [Ray Serve](https://docs.ray.io/en/latest/serve/index.html) apps, making the models scalable across multiple infrastructure configurations. See the [Ray Serve documentation](https://docs.ray.io/en/latest/serve/index.html) for details.
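As a rough sketch, a model deployed as a Ray Serve app is reachable over plain HTTP, so any HTTP client can query it. The route and payload below are illustrative placeholders rather than RoboML's actual API; see the ROS Agents documentation for the real client interface.

```python
# Hypothetical request to a model served behind a Ray Serve HTTP endpoint.
# The route ("/my_model") and JSON schema are placeholders, not RoboML's actual API.
import requests

response = requests.post(
    "http://localhost:8000/my_model",  # port 8000 as mapped in the Docker example above
    json={"query": "Describe what the robot sees."},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```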

### An Experimental Server based on RESP

When using ML models on robots, latency is a major consideration. When models are deployed on distributed infrastructure (rather than on the edge, due to compute limitations), latency depends on both the model inference time and the server communication time. Therefore, RoboML also implements an experimental server built on [RESP](https://github.com/antirez/RESP3), which can be accessed using any Redis client. RESP is a human-readable, binary-safe protocol that is very simple to parse, and can thus be used to implement servers significantly faster than HTTP, especially when the payloads are packaged binary data (for example images, audio, or video). Instead of JSON, the RESP server packages data with msgpack, a cross-platform serialization library available in over 50 languages. Work on the server was inspired by earlier work of [@hansonkd](https://github.com/hansonkd) and his [Tino](https://github.com/hansonkd/Tino) project.
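Because the server speaks the Redis wire protocol and packs payloads with msgpack, it can be reached from any Redis client. The sketch below uses redis-py and msgpack to illustrate the pattern; the port, command name, and payload keys are hypothetical placeholders, not RoboML's documented command set.

```python
# Hypothetical client for the RESP server using redis-py and msgpack.
# The command name and payload keys are placeholders that only illustrate the
# pattern: msgpack-encoded binary payloads exchanged over the Redis protocol.
import msgpack
import redis

client = redis.Redis(host="localhost", port=6379)  # port depends on how roboml-resp is started

payload = msgpack.packb({"query": "Describe what the robot sees."})
raw_reply = client.execute_command("my_model.inference", payload)
print(msgpack.unpackb(raw_reply, raw=False))
```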

## Usage

To run an HTTP server, simply run the following in the terminal:

`roboml`

To run a RESP-based server, run:

`roboml-resp`

To see how these servers are called from a ROS package that implements their clients, please refer to the [ROS Agents](https://automatika-robotics.github.io/ros-agents) documentation.

## Running Tests

To run tests, install the development dependencies:

`pip install ".[dev]"`

Then run the following in the root directory:

`python -m pytest`

## Copyright

The code in this distribution is Copyright (c) 2024 Automatika Robotics unless explicitly indicated otherwise.

RoboML is made available under the MIT license. Details can be found in the [LICENSE](LICENSE) file.

## Contributions

RoboML has been developed in collaboration between [Automatika Robotics](https://automatikarobotics.com/) and [Inria](https://inria.fr/). Contributions from the community are most welcome.

            
