jetson-examples


Name: jetson-examples
Version: 0.0.7
Home page: None
Summary: Running Gen AI models and applications on NVIDIA Jetson devices with one-line command
Upload time: 2024-04-19 06:18:35
Maintainer: None
Docs URL: None
Author: None
Requires Python: >=3.8
License: None
Keywords: llama, llava, gpt, llm, nvidia, jetson, multimodal, jetson orin
Requirements: No requirements were recorded.
Travis-CI: No Travis.
Coveralls test coverage: No coveralls.
            <div align="center">
  <img alt="jetson" width="1200px" src="https://files.seeedstudio.com/wiki/reComputer-Jetson/jetson-examples/Jetson1200x300.png">
</div>

# jetson-examples

[![Discord](https://dcbadge.vercel.app/api/server/5BQCkty7vN?style=flat&compact=true)](https://discord.gg/5BQCkty7vN)

This repository provides examples for running AI models and applications on NVIDIA Jetson devices. For generative AI, it covers a range of examples, including text generation, image generation, vision transformers, vector databases, and audio models.
To run the examples, install the jetson-examples package on a Seeed Studio [reComputer](https://www.seeedstudio.com/reComputer-J4012-p-5586.html), an edge AI device powered by Jetson Orin. The repo aims to make it easy to deploy state-of-the-art AI models on Jetson devices with a single command, for tasks such as language understanding, computer vision, and multimodal processing.

This repo builds on [Jetson Containers](https://github.com/dusty-nv/jetson-containers), which provides a modular container build system for various AI/ML packages on NVIDIA Jetson devices. It also leverages resources and tutorials from the [Jetson Generative AI Lab](https://www.jetson-ai-lab.com/index.html), which showcases bringing generative AI to the edge on Jetson hardware.

## Install

```sh
pip install jetson-examples
```

- [more installation methods](./docs/install.md)
- If it is already installed, run `pip install jetson-examples --upgrade` to update.
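
If you prefer to keep the CLI isolated from system packages, here is a minimal sketch using a Python virtual environment (assuming `python3 -m venv` is available on your Jetson):

```sh
# Create and activate an isolated environment for the jetson-examples CLI
python3 -m venv ~/.venvs/jetson-examples
source ~/.venvs/jetson-examples/bin/activate

# Install and confirm the package is visible
pip install jetson-examples
pip show jetson-examples   # should print the installed version (e.g. 0.0.7)
```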

## Quickstart

To run and chat with [LLaVA](https://www.jetson-ai-lab.com/tutorial_llava.html):

```sh
reComputer run llava
```
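
Since the repo builds on jetson-containers (see above), the command typically pulls a container image and downloads model weights on the first run, which can take a while. A quick way to confirm something is actually running, assuming Docker is the container runtime on your device:

```sh
# List running containers and recently pulled images
sudo docker ps
sudo docker images | head

# Follow the logs of the most recently created container
sudo docker logs -f "$(sudo docker ps -lq)"
```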

## Example list

`reComputer` supports a list of examples from [jetson-ai-lab](https://www.jetson-ai-lab.com/).

Here are some examples that can be run:

| Example                                          | Type                     | Model/Data Size | Image Size | Command                                 |
| ------------------------------------------------ | ------------------------ | --------------- | ---------- | --------------------------------------- |
| 🆕 llama3                                         | Text (LLM)               | 4.9GB           | 10.5GB     | `reComputer run llama3`                 |
| 🆕 [ollama](https://github.com/ollama/ollama)     | Inference Server         | *               | 10.5GB     | `reComputer run ollama`                 |
| LLaVA                                            | Text + Vision (VLM)      | 13GB            | 14.4GB     | `reComputer run llava`                  |
| Live LLaVA                                       | Text + Vision (VLM)      | 13GB            | 20.3GB     | `reComputer run live-llava`             |
| stable-diffusion-webui                           | Image Generation         | 3.97GB          | 7.3GB      | `reComputer run stable-diffusion-webui` |
| nanoowl                                          | Vision Transformers(ViT) | 613MB           | 15.1GB     | `reComputer run nanoowl`                |
| [nanodb](../reComputer/scripts/nanodb/readme.md) | Vector Database          | 76GB            | 7.0GB      | `reComputer run nanodb`                 |
| whisper                                          | Audio                    | 1.5GB           | 6.0GB      | `reComputer run whisper`                |
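
As one example of interacting with a launched example, the `ollama` entry starts an inference server. Assuming the container exposes Ollama's default port `11434` on the host (adjust if your setup maps it differently), you can talk to it with the standard Ollama REST API:

```sh
# Pull a model, then ask a question via Ollama's REST API
# (11434 is Ollama's default port; an assumption here, not guaranteed by this repo)
curl http://localhost:11434/api/pull -d '{"name": "llama3"}'
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```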

> Note: Make sure you have enough free disk space before running an example; `LLaVA`, for instance, needs about `27.4GB` in total (model plus container image).
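
A rough pre-flight check is sketched below; `REQUIRED_GB` and `MOUNT` are placeholders you would adjust per example and per where your Docker data and model cache actually live:

```sh
# Rough disk-space pre-flight check (LLaVA needs ~28GB in total)
REQUIRED_GB=28
MOUNT=/
AVAIL_GB=$(df -BG --output=avail "$MOUNT" | tail -1 | tr -dc '0-9')
if [ "$AVAIL_GB" -lt "$REQUIRED_GB" ]; then
  echo "Only ${AVAIL_GB}GB free on ${MOUNT}; free up space before running."
else
  echo "${AVAIL_GB}GB free on ${MOUNT}; enough for this example."
fi
```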

More examples can be found in [examples.md](./docs/examples.md).

Want to add an example yourself? Check [develop.md](./docs/develop.md).

## TODO List

- [ ] Check whether there is enough disk space before running an example
- [ ] Allow setting some configs, such as `BASE_PATH`
- [ ] Detect the host environment and install required dependencies
- [ ] Support updating jetson-containers
- [ ] Provide a support checklist covering all Jetson device types
- [ ] Improve the table comparing the examples
- [ ] Try JetPack 6.0

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "jetson-examples",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.8",
    "maintainer_email": null,
    "keywords": "llama, llava, gpt, llm, nvidia, jetson, multimodal, jetson orin",
    "author": null,
    "author_email": "luozhixin <zhixin.luo@seeed.cc>",
    "download_url": "https://files.pythonhosted.org/packages/b6/eb/5255986dfbc0aca8fce9dcff44a32114d41f710111dcfb9e10e1d4436a69/jetson_examples-0.0.7.tar.gz",
    "platform": null,
    "description": "<div align=\"center\">\n  <img alt=\"jetson\" width=\"1200px\" src=\"https://files.seeedstudio.com/wiki/reComputer-Jetson/jetson-examples/Jetson1200x300.png\">\n</div>\n\n# jetson-examples\n\n[![Discord](https://dcbadge.vercel.app/api/server/5BQCkty7vN?style=flat&compact=true)](https://discord.gg/5BQCkty7vN)\n\nThis repository provides examples for running AI models and applications on NVIDIA Jetson devices.  For generative AI, it supports a variety of examples including text generation, image generation, vision transformers, vector databases, and audio models.\nTo run the examples, you need to install the jetson-examples package and use the Seeed Studio [reComputer](https://www.seeedstudio.com/reComputer-J4012-p-5586.html), the edge AI device powered by Jetson Orin.  The repo aims to make it easy to deploy state-of-the-art AI models, with just one line of command, on Jetson devices for tasks like language understanding, computer vision, and multimodal processing.\n\nThis repo builds upon the work of the [Jetson Containers](https://github.com/dusty-nv/jetson-containers), which provides a modular container build system for various AI/ML packages on NVIDIA Jetson devices. It also leverages resources and tutorials from the [Jetson Generative AI Lab](https://www.jetson-ai-lab.com/index.html), which showcases bringing generative AI to the edge, powered by Jetson hardware.\n\n## Install\n\n```sh\npip install jetson-examples\n```\n\n- [more installation methods](./docs/install.md)\n- If you have already installed, you can use `pip install jetson-examples --upgrade` to update.\n\n## Quickstart\n\nTo run and chat with [LLaVA](https://www.jetson-ai-lab.com/tutorial_llava.html):\n\n```sh\nreComputer run llava\n```\n\n## Example list\n\nreComputer supports a list of examples from [jetson-ai-lab](https://www.jetson-ai-lab.com/)\n\nHere are some examples that can be run:\n\n| Example                                          | Type                     | Model/Data Size | Image Size | Command                                 |\n| ------------------------------------------------ | ------------------------ | --------------- | ---------- | --------------------------------------- |\n| \ud83c\udd95 llama3                                         | Text (LLM)               | 4.9GB           | 10.5GB     | `reComputer run llama3`                 |\n| \ud83c\udd95 [ollama](https://github.com/ollama/ollama)     | Inference Server         | *               | 10.5GB     | `reComputer run ollama`                 |\n| LLaVA                                            | Text + Vision (VLM)      | 13GB            | 14.4GB     | `reComputer run llava`                  |\n| Live LLaVA                                       | Text + Vision (VLM)      | 13GB            | 20.3GB     | `reComputer run live-llava`             |\n| stable-diffusion-webui                           | Image Generation         | 3.97G           | 7.3GB      | `reComputer run stable-diffusion-webui` |\n| nanoowl                                          | Vision Transformers(ViT) | 613MB           | 15.1GB     | `reComputer run nanoowl`                |\n| [nanodb](../reComputer/scripts/nanodb/readme.md) | Vector Database          | 76GB            | 7.0GB      | `reComputer run nanodb`                 |\n| whisper                                          | Audio                    | 1.5GB           | 6.0GB      | `reComputer run whisper`                |\n\n> Note: You should have enough space to run example, like `LLaVA`, at least 
`27.4GB` totally\n\nMore Examples can be found [examples.md](./docs/examples.md)\n\nWant to add a Example by yourself? Check this [develop.md](./docs/develop.md)\n\n## TODO List\n\n- [ ] check disk space enough or not before run\n- [ ] allow to setting some configs, such as `BASE_PATH`\n- [ ] detect host environment and install what we need\n- [ ] support jetson-containers update\n- [ ] all type jetson support checking list\n- [ ] better table to show example's difference\n- [ ] try jetpack 6.0\n",
    "bugtrack_url": null,
    "license": null,
    "summary": "Running Gen AI models and applications on NVIDIA Jetson devices with one-line command",
    "version": "0.0.7",
    "project_urls": {
        "Homepage": "https://github.com/Seeed-Projects/jetson-examples",
        "Issues": "https://github.com/Seeed-Projects/jetson-examples/issues"
    },
    "split_keywords": [
        "llama",
        " llava",
        " gpt",
        " llm",
        " nvidia",
        " jetson",
        " multimodal",
        " jetson orin"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "0b950a0a9bdadaf6f6011c690d24d633bcb9dca387dc0e6bbdb4654a087d76b0",
                "md5": "2ae105ffa189f36472ab898e3bb4bc6b",
                "sha256": "35c9d0579581ac1e89eed4bc15948cce63ccf46c86200f97549ee87bfc4b9e16"
            },
            "downloads": -1,
            "filename": "jetson_examples-0.0.7-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "2ae105ffa189f36472ab898e3bb4bc6b",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.8",
            "size": 14929,
            "upload_time": "2024-04-19T06:18:33",
            "upload_time_iso_8601": "2024-04-19T06:18:33.745022Z",
            "url": "https://files.pythonhosted.org/packages/0b/95/0a0a9bdadaf6f6011c690d24d633bcb9dca387dc0e6bbdb4654a087d76b0/jetson_examples-0.0.7-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "b6eb5255986dfbc0aca8fce9dcff44a32114d41f710111dcfb9e10e1d4436a69",
                "md5": "f04a17db3ece02c0b07ac5085a6c0170",
                "sha256": "da4b44f3f3abe24e46a16e43013734f8c4522d36affdffd02416734d270fd729"
            },
            "downloads": -1,
            "filename": "jetson_examples-0.0.7.tar.gz",
            "has_sig": false,
            "md5_digest": "f04a17db3ece02c0b07ac5085a6c0170",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.8",
            "size": 11080,
            "upload_time": "2024-04-19T06:18:35",
            "upload_time_iso_8601": "2024-04-19T06:18:35.606264Z",
            "url": "https://files.pythonhosted.org/packages/b6/eb/5255986dfbc0aca8fce9dcff44a32114d41f710111dcfb9e10e1d4436a69/jetson_examples-0.0.7.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-04-19 06:18:35",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "Seeed-Projects",
    "github_project": "jetson-examples",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "lcname": "jetson-examples"
}
        