llama-index-embeddings-textembed

Name: llama-index-embeddings-textembed
Version: 0.1.1
Home page: None
Summary: Integration of TextEmbed with llama-index for embeddings.
Upload time: 2024-11-05 21:31:44
Maintainer: None
Docs URL: None
Author: Keval Dekivadiya
Requires Python: <4.0,>=3.8.1
License: MIT
Requirements: none recorded.
# TextEmbed - Embedding Inference Server

Maintained by Keval Dekivadiya, TextEmbed is licensed under the [Apache-2.0 License](https://opensource.org/licenses/Apache-2.0).

TextEmbed is a high-throughput, low-latency REST API designed for serving vector embeddings. It supports a wide range of sentence-transformer models and frameworks, making it suitable for various applications in natural language processing.

## Features

- **High Throughput & Low Latency**: Designed to handle a large number of requests efficiently.
- **Flexible Model Support**: Works with various sentence-transformer models.
- **Scalable**: Easily integrates into larger systems and scales with demand.
- **Batch Processing**: Processes requests in batches for faster inference.
- **OpenAI-Compatible REST API Endpoint**: Exposes an embeddings endpoint compatible with the OpenAI API.
- **Single-Command Deployment**: Deploy multiple models with a single command.
- **Multiple Embedding Formats**: Supports binary, float16, and float32 embedding formats for faster retrieval.
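The storage impact of those formats is easy to estimate, since cost per vector scales with element width. A back-of-the-envelope sketch for a 384-dimension embedding (the output size of all-MiniLM-L12-v2, used in the examples below); the byte counts come from the per-element widths, not from TextEmbed itself:

```python
# Storage cost per 384-dim embedding in each supported format.
# Illustration only: these are raw per-element sizes, not TextEmbed internals.
DIM = 384

float32_bytes = DIM * 4   # 4 bytes per component
float16_bytes = DIM * 2   # 2 bytes per component
binary_bytes = DIM // 8   # 1 bit per component, packed

print(float32_bytes, float16_bytes, binary_bytes)  # 1536 768 48
```

Binary embeddings trade some accuracy for a 32x reduction in storage and faster distance computations, which is why lower-precision formats help retrieval at scale.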

## Getting Started

### Prerequisites

Ensure you have Python 3.10 or higher installed. You will also need to install the required dependencies.

### Installation via PyPI

Install the TextEmbed package:

```bash
pip install -U textembed
```

### Start the TextEmbed Server

Start the TextEmbed server with your desired models:

```bash
python -m textembed.server --models sentence-transformers/all-MiniLM-L12-v2 --workers 4 --api-key TextEmbed
```
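Once the server is up, any HTTP client can talk to it. The sketch below, using only the standard library, builds and sends an OpenAI-style embeddings request; the `/v1/embeddings` route and payload shape follow the OpenAI embeddings API convention rather than TextEmbed's documentation, so confirm them against your TextEmbed version:

```python
import json
import urllib.request


def build_embeddings_payload(texts, model):
    """OpenAI-style request body: a batch of input texts for one model."""
    return {"model": model, "input": texts}


def fetch_embeddings(
    texts,
    base_url="http://0.0.0.0:8000/v1",
    api_key="TextEmbed",
    model="sentence-transformers/all-MiniLM-L12-v2",
):
    """POST the payload and return one vector per input text."""
    req = urllib.request.Request(
        f"{base_url}/embeddings",
        data=json.dumps(build_embeddings_payload(texts, model)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-style responses carry one {"embedding": [...]} item per input.
    return [item["embedding"] for item in body["data"]]
```

The `--api-key` value passed at server start is sent back as a bearer token here.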

### Example Usage with llama-index

Here's a simple example to get you started with llama-index:

```python
from llama_index.embeddings.textembed import TextEmbedEmbedding

# Initialize the TextEmbedEmbedding class
embed = TextEmbedEmbedding(
    model_name="sentence-transformers/all-MiniLM-L12-v2",
    base_url="http://0.0.0.0:8000/v1",
    auth_token="TextEmbed",
)

# Get embeddings for a batch of texts
embeddings = embed.get_text_embedding_batch(
    [
        "It is raining cats and dogs here!",
        "India has a diverse cultural heritage.",
    ]
)

print(embeddings)
```
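The batch call returns one vector per input string. As a small follow-up (plain Python, not part of the package), the usual next step is comparing two embeddings by cosine similarity:

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


# score = cosine_similarity(embeddings[0], embeddings[1])
# Values closer to 1.0 mean the two texts are semantically more similar.
```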

For more information, please read the [documentation](https://github.com/kevaldekivadiya2415/textembed/blob/main/docs/setup.md).

            
