rtx-torch

- Name: rtx-torch
- Version: 0.1.3
- Home page: https://github.com/kyegomez/rt-x
- Summary: rtx - Pytorch
- Upload time: 2024-02-05 07:21:49
- Author: Kye Gomez
- Requires Python: >=3.9,<3.12
- License: MIT
- Keywords: artificial intelligence, deep learning, optimizers, prompt engineering

[![Multi-Modality](agorabanner.png)](https://discord.gg/qUtxnK2NMf)

# RT-X
PyTorch implementation of the models RT-1-X and RT-2-X from the paper "Open X-Embodiment: Robotic Learning Datasets and RT-X Models".

Here we implement both model architectures, RTX-1 and RTX-2.

[Paper Link](https://robotics-transformer-x.github.io/)

- For simplicity, the RTX-2 implementation does not natively output a 7-dimensional action vector; it outputs text tokens instead. If you want a 7-dimensional vector, you can implement the same token learner used in RTX1 (see the sketch below).
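
The action-detokenization step is not shown in this repository's examples. As an illustrative sketch only (the function and bounds below are hypothetical, not part of the rtx-torch API), recovering a 7-dimensional action from discrete tokens amounts to mapping each per-dimension bin index back to the centre of its bin, in the spirit of RT-1's 256-bin action discretization:

```python
import torch

def detokenize_actions(
    action_tokens: torch.Tensor,  # (batch, 7) integer bin indices in [0, num_bins)
    num_bins: int = 256,
    low: float = -1.0,
    high: float = 1.0,
) -> torch.Tensor:
    """Map discrete action tokens back to continuous values (one bin centre per token)."""
    bin_width = (high - low) / num_bins
    # centre of bin i is low + (i + 0.5) * bin_width
    return low + (action_tokens.float() + 0.5) * bin_width

# hypothetical usage: 2 samples, one token per action dimension
tokens = torch.randint(0, 256, (2, 7))
actions = detokenize_actions(tokens)  # (2, 7): x, y, z, roll, pitch, yaw, gripper
print(actions.shape)
```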


# Appreciation
* Lucidrains
* Agorians

# Install
`pip install rtx-torch`

# Usage
To see detailed usage, run `python run.py --help`.
## RTX1
- RTX1 takes in text and videos
- Does not use EfficientNet yet; we are integrating it now, after which the implementation will be complete
- Uses a SOTA transformer architecture

```python
import torch
from rtx.rtx1 import RTX1, FilmViTConfig

# use a pre-trained MaxViT model from PyTorch as the vision backbone
model = RTX1(film_vit_config=FilmViTConfig(pretrained=True))

# (batch, channels, frames, height, width)
video = torch.randn(2, 3, 6, 224, 224)

instructions = ["bring me that apple sitting on the table", "please pass the butter"]

# compute the train logits
train_logits = model.train(video, instructions)

# set the model to evaluation mode
model.model.eval()

# compute the eval logits with a conditional scale of 3
eval_logits = model.run(video, instructions, cond_scale=3.0)
print(eval_logits.shape)
```


## RTX-2
- RTX-2 takes in images and text, interleaves them to form multi-modal sentences, and outputs text tokens rather than a 7-dimensional vector of x, y, z, roll, pitch, yaw, and gripper.
```python
import torch
from rtx import RTX2

# dummy image batch and token ids
img = torch.randn(1, 3, 256, 256)
text = torch.randint(0, 20000, (1, 1024))

model = RTX2()
output = model(img, text)
print(output)
```

## EfficientNetFilm
- Extracts features from a given image
```python
from rtx import EfficientNetFilm

# EfficientNet-B0 backbone
model = EfficientNetFilm("efficientnet-b0", 10)

# extract features from an image file
out = model("img.jpeg")
```
# Model Differences from the Paper Implementation
## RT-1
The main difference here is the replacement of the FiLM-EfficientNet backbone (a pre-trained EfficientNet-B3 with FiLM layers inserted) with a MaxViT model.
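
For background, a FiLM layer scales and shifts each feature-map channel with a gamma and beta predicted from a conditioning (here, language) embedding. The block below is a generic, minimal sketch of that idea, not the exact module used in this repository:

```python
import torch
from torch import nn

class FiLMLayer(nn.Module):
    """Generic FiLM block: modulate image feature maps with a conditioning embedding."""

    def __init__(self, cond_dim: int, num_channels: int):
        super().__init__()
        # predict a per-channel scale (gamma) and shift (beta) from the conditioning vector
        self.to_gamma_beta = nn.Linear(cond_dim, num_channels * 2)

    def forward(self, features: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # features: (batch, channels, height, width); cond: (batch, cond_dim)
        gamma, beta = self.to_gamma_beta(cond).chunk(2, dim=-1)
        return gamma[:, :, None, None] * features + beta[:, :, None, None]

# hypothetical usage: condition a (2, 64, 28, 28) feature map on a 512-dim text embedding
film = FiLMLayer(cond_dim=512, num_channels=64)
out = film(torch.randn(2, 64, 28, 28), torch.randn(2, 512))
print(out.shape)  # torch.Size([2, 64, 28, 28])
```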



# Tests
I created a single test file that uses pytest to run tests on all the modules (RTX1, RTX2, EfficientNetFilm). First clone the repository and `cd` into it, install the requirements from requirements.txt with pip, then run:

`python -m pytest tests/tests.py`

# License
MIT

# Citations
```bibtex
@misc{open_x_embodiment_rt_x_2023,
title={Open {X-E}mbodiment: Robotic Learning Datasets and {RT-X} Models},
author = {Open X-Embodiment Collaboration and Abhishek Padalkar and Acorn Pooley and Ajinkya Jain and Alex Bewley and Alex Herzog and Alex Irpan and Alexander Khazatsky and Anant Rai and Anikait Singh and Anthony Brohan and Antonin Raffin and Ayzaan Wahid and Ben Burgess-Limerick and Beomjoon Kim and Bernhard Schölkopf and Brian Ichter and Cewu Lu and Charles Xu and Chelsea Finn and Chenfeng Xu and Cheng Chi and Chenguang Huang and Christine Chan and Chuer Pan and Chuyuan Fu and Coline Devin and Danny Driess and Deepak Pathak and Dhruv Shah and Dieter Büchler and Dmitry Kalashnikov and Dorsa Sadigh and Edward Johns and Federico Ceola and Fei Xia and Freek Stulp and Gaoyue Zhou and Gaurav S. Sukhatme and Gautam Salhotra and Ge Yan and Giulio Schiavi and Hao Su and Hao-Shu Fang and Haochen Shi and Heni Ben Amor and Henrik I Christensen and Hiroki Furuta and Homer Walke and Hongjie Fang and Igor Mordatch and Ilija Radosavovic and Isabel Leal and Jacky Liang and Jaehyung Kim and Jan Schneider and Jasmine Hsu and Jeannette Bohg and Jeffrey Bingham and Jiajun Wu and Jialin Wu and Jianlan Luo and Jiayuan Gu and Jie Tan and Jihoon Oh and Jitendra Malik and Jonathan Tompson and Jonathan Yang and Joseph J. Lim and João Silvério and Junhyek Han and Kanishka Rao and Karl Pertsch and Karol Hausman and Keegan Go and Keerthana Gopalakrishnan and Ken Goldberg and Kendra Byrne and Kenneth Oslund and Kento Kawaharazuka and Kevin Zhang and Keyvan Majd and Krishan Rana and Krishnan Srinivasan and Lawrence Yunliang Chen and Lerrel Pinto and Liam Tan and Lionel Ott and Lisa Lee and Masayoshi Tomizuka and Maximilian Du and Michael Ahn and Mingtong Zhang and Mingyu Ding and Mohan Kumar Srirama and Mohit Sharma and Moo Jin Kim and Naoaki Kanazawa and Nicklas Hansen and Nicolas Heess and Nikhil J Joshi and Niko Suenderhauf and Norman Di Palo and Nur Muhammad Mahi Shafiullah and Oier Mees and Oliver Kroemer and Pannag R Sanketi and Paul Wohlhart and Peng Xu and Pierre Sermanet and Priya Sundaresan and Quan Vuong and Rafael Rafailov and Ran Tian and Ria Doshi and Roberto Martín-Martín and Russell Mendonca and Rutav Shah and Ryan Hoque and Ryan Julian and Samuel Bustamante and Sean Kirmani and Sergey Levine and Sherry Moore and Shikhar Bahl and Shivin Dass and Shuran Song and Sichun Xu and Siddhant Haldar and Simeon Adebola and Simon Guist and Soroush Nasiriany and Stefan Schaal and Stefan Welker and Stephen Tian and Sudeep Dasari and Suneel Belkhale and Takayuki Osa and Tatsuya Harada and Tatsuya Matsushima and Ted Xiao and Tianhe Yu and Tianli Ding and Todor Davchev and Tony Z. Zhao and Travis Armstrong and Trevor Darrell and Vidhi Jain and Vincent Vanhoucke and Wei Zhan and Wenxuan Zhou and Wolfram Burgard and Xi Chen and Xiaolong Wang and Xinghao Zhu and Xuanlin Li and Yao Lu and Yevgen Chebotar and Yifan Zhou and Yifeng Zhu and Ying Xu and Yixuan Wang and Yonatan Bisk and Yoonyoung Cho and Youngwoon Lee and Yuchen Cui and Yueh-hua Wu and Yujin Tang and Yuke Zhu and Yunzhu Li and Yusuke Iwasawa and Yutaka Matsuo and Zhuo Xu and Zichen Jeff Cui},
howpublished  = {\url{https://arxiv.org/abs/2310.08864}},
year = {2023},
}
```

# Todo
- Integrate EfficientNetFilm with RTX-1
- Create a training script for RTX-1 by unrolling observations and applying basic cross-entropy, as in the original RT-1
- Use the RTX-2 dataset on Hugging Face
- [Check out the project board for more tasks](https://github.com/users/kyegomez/projects/10/views/1)
            
