Name | textembed |
Version | 0.0.8 |
home_page | https://github.com/kevaldekivadiya2415/textembed |
Summary | TextEmbed provides a robust and scalable REST API for generating vector embeddings from text. Built for performance and flexibility, it supports various sentence-transformer models, allowing users to easily integrate state-of-the-art NLP techniques into their applications. Whether you need embeddings for search, recommendation, or other NLP tasks, TextEmbed delivers with high efficiency. |
upload_time | 2024-06-13 03:05:31 |
maintainer | None |
docs_url | None |
author | Keval Dekivadiya |
requires_python | >=3.10.0 |
license | Apache License 2.0 |
keywords | embedding, rag |
requirements | No requirements were recorded. |
Travis-CI | No Travis. |
coveralls test coverage | No coveralls. |
[![Contributors](https://img.shields.io/github/contributors/kevaldekivadiya2415/textembed.svg)](https://github.com/kevaldekivadiya2415/textembed/graphs/contributors)
[![Issues](https://img.shields.io/github/issues/kevaldekivadiya2415/textembed.svg)](https://github.com/kevaldekivadiya2415/textembed/issues)
[![Apache License 2.0](https://img.shields.io/github/license/kevaldekivadiya2415/textembed.svg)](https://github.com/kevaldekivadiya2415/textembed/blob/main/LICENSE)
[![Downloads](https://static.pepy.tech/badge/textembed)](https://pepy.tech/project/textembed)
[![Docker Pulls](https://img.shields.io/docker/pulls/kevaldekivadiya/textembed.svg)](https://hub.docker.com/r/kevaldekivadiya/textembed)
[![PyPI - Version](https://img.shields.io/pypi/v/textembed)](https://pypi.org/project/textembed/)
[![Reliability Rating](https://sonarcloud.io/api/project_badges/measure?project=kevaldekivadiya2415_textembed&metric=reliability_rating)](https://sonarcloud.io/summary/new_code?id=kevaldekivadiya2415_textembed)
[![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=kevaldekivadiya2415_textembed&metric=alert_status)](https://sonarcloud.io/summary/new_code?id=kevaldekivadiya2415_textembed)
# TextEmbed - Embedding Inference Server
TextEmbed is a high-throughput, low-latency REST API designed for serving vector embeddings. It supports a wide range of sentence-transformer models and frameworks, making it suitable for various applications in natural language processing.
## Features
- **High Throughput & Low Latency:** Designed to handle a large number of requests efficiently.
- **Flexible Model Support:** Works with various sentence-transformer models.
- **Scalable:** Easily integrates into larger systems and scales with demand.
- **Batch Processing:** Batches incoming requests for faster, more efficient inference.
- **OpenAI-Compatible REST API:** Exposes an endpoint that follows the OpenAI embeddings API convention.
- **Single-Command Deployment:** Deploys multiple models with one command.
- **Multiple Embedding Formats:** Supports binary, float16, and float32 embedding formats for faster retrieval.
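The embedding-format options trade precision for payload size. A minimal sketch of the size difference, using only the Python standard library (`struct`'s `"e"` format is IEEE 754 half-precision; this illustration is independent of textembed itself):

```python
import struct

vec = [0.12, -0.5, 0.98, 0.33]  # toy 4-dimensional embedding

# Pack the same vector as float32 ("f") and float16 ("e").
as_f32 = struct.pack(f"{len(vec)}f", *vec)  # 4 bytes per value
as_f16 = struct.pack(f"{len(vec)}e", *vec)  # 2 bytes per value

print(len(as_f32), len(as_f16))  # prints: 16 8
```

float16 halves the transfer size at the cost of roughly three decimal digits of precision; binary quantization shrinks the payload further, to one bit per dimension.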
## Getting Started
### Prerequisites
Ensure you have Python 3.10 or higher installed.
### Installation
1. Install the package:
```bash
pip install -U textembed
```
2. Start the TextEmbed server with your desired models:
```bash
python3 -m textembed.server --models <Model1>,<Model2> --port <Port>
```
Replace `<Model1>` and `<Model2>` with the names of the models you want to serve, separated by commas, and `<Port>` with the port number on which the server should run.
For more information about Docker deployment and configuration, see [setup.md](docs/setup.md).
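Once the server is running, the OpenAI-compatible endpoint can be called with any HTTP client. The sketch below only builds the request body; the `/v1/embeddings` route, port 8000, and the model name are assumptions based on the OpenAI embeddings API convention, not confirmed by this README (check docs/setup.md for the actual routes):

```python
import json

# Hypothetical request body for a locally running TextEmbed server.
# Model name and route are illustrative, not confirmed by this README.
payload = {
    "model": "sentence-transformers/all-MiniLM-L6-v2",  # example model name
    "input": ["TextEmbed serves vector embeddings."],
}
body = json.dumps(payload)

# Send with any HTTP client, e.g.:
#   curl -X POST http://localhost:8000/v1/embeddings \
#        -H "Content-Type: application/json" -d "$BODY"
```

Because the request shape follows the OpenAI embeddings schema, existing OpenAI client libraries pointed at the local base URL should also work unchanged.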
## Raw data
```json
{
  "_id": null,
  "home_page": "https://github.com/kevaldekivadiya2415/textembed",
  "name": "textembed",
  "maintainer": null,
  "docs_url": null,
  "requires_python": ">=3.10.0",
  "maintainer_email": null,
  "keywords": "Embedding, RAG",
  "author": "Keval Dekivadiya",
  "author_email": "kevaldekivadiya2415@gmail.com",
  "download_url": "https://files.pythonhosted.org/packages/3f/a4/2d6d090aeda3003bbbda396a3be8ae1c5db68db38e44b0070f4f608c07f1/textembed-0.0.8.tar.gz",
  "platform": null,
  "description": "[![Contributors](https://img.shields.io/github/contributors/kevaldekivadiya2415/textembed.svg)](https://github.com/kevaldekivadiya2415/textembed/graphs/contributors)\n[![Issues](https://img.shields.io/github/issues/kevaldekivadiya2415/textembed.svg)](https://github.com/kevaldekivadiya2415/textembed/issues)\n[![Apache License 2.0](https://img.shields.io/github/license/kevaldekivadiya2415/textembed.svg)](https://github.com/kevaldekivadiya2415/textembed/blob/main/LICENSE)\n[![Downloads](https://static.pepy.tech/badge/textembed)](https://pepy.tech/project/textembed)\n[![Docker Pulls](https://img.shields.io/docker/pulls/kevaldekivadiya/textembed.svg)](https://hub.docker.com/r/kevaldekivadiya/textembed)\n[![PyPI - Version](https://img.shields.io/pypi/v/textembed)](https://pypi.org/project/textembed/)\n[![Reliability Rating](https://sonarcloud.io/api/project_badges/measure?project=kevaldekivadiya2415_textembed&metric=reliability_rating)](https://sonarcloud.io/summary/new_code?id=kevaldekivadiya2415_textembed)\n[![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=kevaldekivadiya2415_textembed&metric=alert_status)](https://sonarcloud.io/summary/new_code?id=kevaldekivadiya2415_textembed)\n\n\n\n# TextEmbed - Embedding Inference Server\n\nTextEmbed is a high-throughput, low-latency REST API designed for serving vector embeddings. It supports a wide range of sentence-transformer models and frameworks, making it suitable for various applications in natural language processing.\n\n## Features\n\n- **High Throughput & Low Latency:** Designed to handle a large number of requests efficiently.\n- **Flexible Model Support:** Works with various sentence-transformer models.\n- **Scalable:** Easily integrates into larger systems and scales with demand.\n- **Batch Processing:** Supports batch processing for better and faster inference.\n- **OpenAI Compatible REST API Endpoint:** Provides an OpenAI compatible REST API endpoint.\n- **Single Line Command Deployment:** Deploy multiple models via a single command for efficient deployment.\n- **Support for Embedding Formats:** Supports binary, float16, and float32 embeddings formats for faster retrieval.\n\n## Getting Started\n\n### Prerequisites\n\nEnsure you have Python 3.10 or higher installed. You will also need to install the required dependencies.\n\n### Installation\n\n1. Install the required dependencies:\n ```bash\n pip install -U textembed\n ```\n\n2. Start the TextEmbed server with your desired models:\n ```bash\n python3 -m textembed.server --models <Model1>, <Model2> --port <Port>\n ```\n\n Replace `<Model1>` and `<Model2>` with the names of the models you want to use, separated by commas. Replace `<Port>` with the port number on which you want to run the server.\n\nFor more information about the Docker deployment and configuration, please refer to the documentation [setup.md](docs/setup.md).\n",
  "bugtrack_url": null,
  "license": "Apache License 2.0",
  "summary": "TextEmbed provides a robust and scalable REST API for generating vector embeddings from text. Built for performance and flexibility, it supports various sentence-transformer models, allowing users to easily integrate state-of-the-art NLP techniques into their applications. Whether you need embeddings for search, recommendation, or other NLP tasks, TextEmbed delivers with high efficiency.",
  "version": "0.0.8",
  "project_urls": {
    "Homepage": "https://github.com/kevaldekivadiya2415/textembed"
  },
  "split_keywords": [
    "embedding",
    " rag"
  ],
  "urls": [
    {
      "comment_text": "",
      "digests": {
        "blake2b_256": "3ef9ebf165504d63659baa9b9d8eca54139865ba9ab70535a49360b97368b27b",
        "md5": "4a7e78ff05dd3ed9bd217459ff4ab1cf",
        "sha256": "8034c1f5bf7d705564b14282998aead308e928969930ab4e370c83362b213d0a"
      },
      "downloads": -1,
      "filename": "textembed-0.0.8-py3-none-any.whl",
      "has_sig": false,
      "md5_digest": "4a7e78ff05dd3ed9bd217459ff4ab1cf",
      "packagetype": "bdist_wheel",
      "python_version": "py3",
      "requires_python": ">=3.10.0",
      "size": 26474,
      "upload_time": "2024-06-13T03:05:30",
      "upload_time_iso_8601": "2024-06-13T03:05:30.246165Z",
      "url": "https://files.pythonhosted.org/packages/3e/f9/ebf165504d63659baa9b9d8eca54139865ba9ab70535a49360b97368b27b/textembed-0.0.8-py3-none-any.whl",
      "yanked": false,
      "yanked_reason": null
    },
    {
      "comment_text": "",
      "digests": {
        "blake2b_256": "3fa42d6d090aeda3003bbbda396a3be8ae1c5db68db38e44b0070f4f608c07f1",
        "md5": "c362c242d3ae5cf026d908c21c8e9094",
        "sha256": "b8be1d2ce72c87efe805c83db0c6868bd7416e7bb3a724fe67123f281c9a0aed"
      },
      "downloads": -1,
      "filename": "textembed-0.0.8.tar.gz",
      "has_sig": false,
      "md5_digest": "c362c242d3ae5cf026d908c21c8e9094",
      "packagetype": "sdist",
      "python_version": "source",
      "requires_python": ">=3.10.0",
      "size": 22703,
      "upload_time": "2024-06-13T03:05:31",
      "upload_time_iso_8601": "2024-06-13T03:05:31.709687Z",
      "url": "https://files.pythonhosted.org/packages/3f/a4/2d6d090aeda3003bbbda396a3be8ae1c5db68db38e44b0070f4f608c07f1/textembed-0.0.8.tar.gz",
      "yanked": false,
      "yanked_reason": null
    }
  ],
  "upload_time": "2024-06-13 03:05:31",
  "github": true,
  "gitlab": false,
  "bitbucket": false,
  "codeberg": false,
  "github_user": "kevaldekivadiya2415",
  "github_project": "textembed",
  "travis_ci": false,
  "coveralls": false,
  "github_actions": true,
  "requirements": [],
  "lcname": "textembed"
}
```