nexaai-gpu

Name: nexaai-gpu
Version: 0.0.1.dev3
Home page: https://github.com/NexaAI/nexa-sdk
Summary: Nexa AI SDK
Upload time: 2024-08-14 16:15:36
Maintainer: None
Docs URL: None
Author: Nexa AI
Requires Python: >=3.7
License: None
Keywords: None
Requirements: No requirements were recorded.
# Nexa SDK

The Nexa SDK is a comprehensive toolkit supporting **ONNX** and **GGML** models. It provides text generation, image generation, vision-language model (VLM), and text-to-speech (TTS) capabilities. It also offers an OpenAI-compatible API server with JSON schema mode for function calling and streaming support, along with a user-friendly Streamlit UI.

## Features
- **Model Support:**
  - **ONNX & GGML models**
  - **Conversion Engine**
  - **Inference Engine**:
    - **Text Generation**
    - **Image Generation**
    - **Vision-Language Models (VLM)**
    - **Text-to-Speech (TTS)**
- **Server:**
  - OpenAI-compatible API
  - JSON schema mode for function calling
  - Streaming support
- **Streamlit UI** for interactive model deployment and testing
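
Since the server is OpenAI-compatible, a client talks to it with the same request shape as the OpenAI chat-completions API. The sketch below builds such a payload, including a JSON-schema function description for function calling; the endpoint path, model name, and `get_weather` schema are illustrative assumptions, not taken from the Nexa documentation.

```python
import json

def build_chat_request(model, user_message, functions=None, stream=False):
    """Build an OpenAI-style chat-completion payload as a plain dict."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": stream,
    }
    if functions:
        # JSON-schema descriptions of callable functions (function calling)
        payload["functions"] = functions
    return payload

# Hypothetical function schema for illustration only
weather_fn = {
    "name": "get_weather",
    "description": "Look up the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

req = build_chat_request("llama3", "Weather in Paris?", functions=[weather_fn])
print(json.dumps(req, indent=2))
```

Such a payload would be POSTed to the server's chat-completions endpoint with any OpenAI-compatible client.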

## Installation
For the CPU version:
```
pip install nexaai
pip install "nexaai[onnx]"  # if you want to use ONNX models
```
For the GPU version:
```
pip install nexaai-gpu
pip install "nexaai-gpu[onnx]"  # if you want to use ONNX models
```

## Nexa CLI Model Commands

### NLP Models

| Model        | Type   | Format        | Command                          |
|--------------|--------|---------------|----------------------------------|
| octopus-v2   | NLP    | GGUF          | `nexa-cli gen-text octopus-v2`   |
| octopus-v4   | NLP    | GGUF          | `nexa-cli gen-text octopus-v4`   |
| tinyllama    | NLP    | GGUF          | `nexa-cli gen-text tinyllama`    |
| llama2       | NLP    | GGUF/ONNX     | `nexa-cli gen-text llama2`       |
| llama3       | NLP    | GGUF/ONNX     | `nexa-cli gen-text llama3`       |
| llama3.1     | NLP    | GGUF/ONNX     | `nexa-cli gen-text llama3.1`     |
| gemma        | NLP    | GGUF/ONNX     | `nexa-cli gen-text gemma`        |
| gemma2       | NLP    | GGUF          | `nexa-cli gen-text gemma2`       |
| qwen1.5      | NLP    | GGUF          | `nexa-cli gen-text qwen1.5`      |
| qwen2        | NLP    | GGUF/ONNX     | `nexa-cli gen-text qwen2`        |
| mistral      | NLP    | GGUF/ONNX     | `nexa-cli gen-text mistral`      |
| codegemma    | NLP    | GGUF          | `nexa-cli gen-text codegemma`    |
| codellama    | NLP    | GGUF          | `nexa-cli gen-text codellama`    |
| codeqwen     | NLP    | GGUF          | `nexa-cli gen-text codeqwen`     |
| deepseek-coder | NLP  | GGUF          | `nexa-cli gen-text deepseek-coder` |
| dolphin-mistral | NLP | GGUF          | `nexa-cli gen-text dolphin-mistral` |
| nomic-embed-text | NLP | GGUF         | `nexa-cli gen-text nomic-embed-text` |
| phi2         | NLP    | GGUF          | `nexa-cli gen-text phi2`         |
| phi3         | NLP    | GGUF/ONNX     | `nexa-cli gen-text phi3`         |

### Multimodal Models

| Model            | Type        | Format        | Command                         |
|------------------|-------------|---------------|---------------------------------|
| nanollava        | Multimodal  | GGUF          | `nexa-cli vlm nanollava`        |
| llava-phi3       | Multimodal  | GGUF          | `nexa-cli vlm llava-phi3`       |
| llava-llama3     | Multimodal  | GGUF          | `nexa-cli vlm llava-llama3`     |
| llava1.6-mistral | Multimodal  | GGUF          | `nexa-cli vlm llava1.6-mistral` |
| llava1.6-vicuna  | Multimodal  | GGUF          | `nexa-cli vlm llava1.6-vicuna`  |

### Computer Vision Models

| Model                | Type             | Format        | Command                             |
|----------------------|------------------|---------------|-------------------------------------|
| stable-diffusion-v1-4 | Computer Vision | GGUF          | `nexa-cli gen-image sd1-4`          |
| stable-diffusion-v1-5 | Computer Vision | GGUF/ONNX     | `nexa-cli gen-image sd1-5`          |
| lcm-dreamshaper       | Computer Vision | GGUF/ONNX     | `nexa-cli gen-image lcm-dreamshaper` |

            
