| Field | Value |
|-------|-------|
| Name | faster-translate |
| Version | 1.0.2 |
| home_page | None |
| Summary | A high-performance translation library using CTranslate2 and vLLM. |
| upload_time | 2025-03-03 23:14:12 |
| maintainer | None |
| docs_url | None |
| author | None |
| requires_python | >=3.8 |
| license | MIT License (full text below) |
| keywords | translation, huggingface, nlp, ctranslate2, vllm |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |

MIT License

Copyright (c) 2024 Sawradip Saha

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
# Faster Translate
<div align="center">
[](https://pepy.tech/projects/faster-translate)
[](https://pepy.tech/projects/faster-translate)
[](https://github.com/sawradip/faster-translate/blob/main/LICENSE)
[](https://pypi.org/project/faster-translate/)
</div>
A high-performance translation library powered by state-of-the-art models. Faster Translate offers optimized inference using CTranslate2 and vLLM backends, providing an easy-to-use interface for applications requiring efficient and accurate translations.
## 🚀 Features
- **High-performance inference** using CTranslate2 and vLLM backends
- **Seamless integration** with Hugging Face models
- **Flexible API** for single sentence, batch, and large-scale translation
- **Dataset translation** with direct Hugging Face integration
- **Multi-backend support** for both traditional (CTranslate2) and LLM-based (vLLM) models
- **Text normalization** for improved translation quality
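The text-normalization step matters for scripts like Bengali, where the same visible text can have several Unicode encodings. The library's `buetnlpnormalizer` wraps the csebuetnlp normalizer; as a rough illustration of what character-level normalization does, here is a stdlib-only NFKC sketch (not the actual normalizer, which also handles punctuation maps, ZWJ/ZWNJ, and more):

```python
import unicodedata

def normalize_text(text: str) -> str:
    """Illustrative normalizer: collapse Unicode compatibility forms
    and stray whitespace. Only a sketch of the general idea."""
    text = unicodedata.normalize("NFKC", text)
    return " ".join(text.split())

print(normalize_text("  hello \u00a0 world "))  # "hello world"
```

Feeding the normalized form to the model reduces spurious vocabulary misses caused by visually identical but differently encoded input.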
## 📦 Installation
```bash
pip install faster-translate
```
### Optional Dependencies
For specific normalizers or model backends:
```bash
# For Bengali text normalization
pip install git+https://github.com/csebuetnlp/normalizer
# For vLLM backend support (required for LLM-based models)
pip install vllm
```
## 🔍 Usage
### Basic Translation
```python
from faster_translate import TranslatorModel

# Initialize with a pre-configured model
translator = TranslatorModel.from_pretrained("banglanmt_bn2en")

# Translate a single sentence
english_text = translator.translate_single("দেশে বিদেশি ঋণ নিয়ে এখন বেশ আলোচনা হচ্ছে।")
print(english_text)

# Translate a batch of sentences
bengali_sentences = [
    "দেশে বিদেশি ঋণ নিয়ে এখন বেশ আলোচনা হচ্ছে।",
    "রাত তিনটার দিকে কাঁচামাল নিয়ে গুলিস্তান থেকে পুরান ঢাকার শ্যামবাজারের আড়তে যাচ্ছিলেন লিটন ব্যাপারী।"
]
translations = translator.translate_batch(bengali_sentences)
```
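Batch APIs like the one above typically process inputs in fixed-size chunks to keep GPU memory bounded. This is a generic chunking sketch of that pattern (an assumption about the common approach, not faster-translate's actual internals):

```python
from typing import Iterator, List

def chunked(items: List[str], batch_size: int) -> Iterator[List[str]]:
    """Yield successive batches of at most batch_size items."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

print(list(chunked(["a", "b", "c", "d", "e"], 2)))
# [['a', 'b'], ['c', 'd'], ['e']]
```

Passing the whole list to `translate_batch` lets the backend pick efficient chunk boundaries for you; manual chunking is mainly useful for progress reporting.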
### Using Different Model Backends
```python
# Using a CTranslate2-based model
ct2_translator = TranslatorModel.from_pretrained("banglanmt_bn2en")
# Using a vLLM-based model
vllm_translator = TranslatorModel.from_pretrained("bangla_qwen_en2bn")
```
### Loading Models from Hugging Face
```python
# Load a specific model from Hugging Face
translator = TranslatorModel.from_pretrained(
    "sawradip/faster-translate-banglanmt-bn2en-t5",
    normalizer_func="buetnlpnormalizer"
)
```
### Translating Hugging Face Datasets
Translate an entire dataset with a single function call:
```python
translator = TranslatorModel.from_pretrained("banglanmt_en2bn")

# Translate the entire dataset
translator.translate_hf_dataset(
    "sawradip/bn-translation-mega-raw-noisy",
    batch_size=16
)

# Translate specific subsets
translator.translate_hf_dataset(
    "sawradip/bn-translation-mega-raw-noisy",
    subset_name=["google"],
    batch_size=16
)

# Translate a portion of the dataset
translator.translate_hf_dataset(
    "sawradip/bn-translation-mega-raw-noisy",
    subset_name="alt",
    batch_size=16,
    translation_size=0.5  # Translate 50% of the dataset
)
```
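Presumably, `translation_size=0.5` selects a fraction of the dataset rows before translation. A hypothetical sketch of that selection arithmetic (whether the library takes a leading slice or a random sample is an assumption here):

```python
def select_rows(rows, translation_size=1.0):
    """Keep the leading fraction of rows, for 0 < translation_size <= 1.
    Illustrative only; the library's exact selection strategy may differ."""
    if not 0 < translation_size <= 1:
        raise ValueError("translation_size must be in (0, 1]")
    n = int(len(rows) * translation_size)
    return rows[:n]

print(select_rows(list(range(10)), 0.5))  # [0, 1, 2, 3, 4]
```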
### Publishing Translated Datasets
Push translated datasets directly to Hugging Face:
```python
translator.translate_hf_dataset(
    "sawradip/bn-translation-mega-raw-noisy",
    subset_name="alt",
    batch_size=16,
    push_to_hub=True,
    token="your_huggingface_token",
    save_repo_name="your-username/translated-dataset"
)
```
## 🌐 Supported Models
| Model ID | Source Language | Target Language | Backend | Description |
|----------|----------------|----------------|---------|-------------|
| `banglanmt_bn2en` | Bengali | English | CTranslate2 | BanglaNMT model from BUET |
| `banglanmt_en2bn` | English | Bengali | CTranslate2 | BanglaNMT model from BUET |
| `bangla_mbartv1_en2bn` | English | Bengali | CTranslate2 | MBart-based translation model |
| `bangla_qwen_en2bn` | English | Bengali | vLLM | Qwen-based translation model |
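The model IDs above can be passed directly to `TranslatorModel.from_pretrained`. For programmatic selection, the table can be transcribed into a small lookup helper (the registry dict below is illustrative, built from the table, and is not the library's internal structure):

```python
# (source, target, backend) transcribed from the supported-models table.
SUPPORTED_MODELS = {
    "banglanmt_bn2en": ("bn", "en", "ctranslate2"),
    "banglanmt_en2bn": ("en", "bn", "ctranslate2"),
    "bangla_mbartv1_en2bn": ("en", "bn", "ctranslate2"),
    "bangla_qwen_en2bn": ("en", "bn", "vllm"),
}

def models_for(src: str, tgt: str) -> list:
    """Return the model IDs that translate src -> tgt."""
    return [mid for mid, (s, t, _) in SUPPORTED_MODELS.items()
            if (s, t) == (src, tgt)]

print(models_for("en", "bn"))
# ['banglanmt_en2bn', 'bangla_mbartv1_en2bn', 'bangla_qwen_en2bn']
```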
## 🛠️ Advanced Configuration
### Custom Sampling Parameters for vLLM Models
```python
from vllm import SamplingParams

# Create custom sampling parameters
sampling_params = SamplingParams(
    temperature=0.7,
    top_p=0.9,
    max_tokens=512
)

# Initialize translator with custom parameters
translator = TranslatorModel.from_pretrained(
    "bangla_qwen_en2bn",
    sampling_params=sampling_params
)
```
## 💪 Contributors
<a href="https://github.com/sawradip/faster-translate/graphs/contributors">
<img src="https://contributors-img.web.app/image?repo=sawradip/faster-translate" alt="List of Contributors"/>
</a>
## 📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
## 📚 Citation
If you use Faster Translate in your research, please cite:
```bibtex
@software{faster_translate,
  author = {Sawradip Saha and Contributors},
  title  = {Faster Translate: High-Performance Machine Translation Library},
  url    = {https://github.com/sawradip/faster-translate},
  year   = {2024},
}
```
## Raw data
{
"_id": null,
"home_page": null,
"name": "faster-translate",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.8",
"maintainer_email": null,
"keywords": "translation, huggingface, nlp, ctranslate2, vllm",
"author": null,
"author_email": "Sawradip Saha <sawradip0@gmail.com>",
"download_url": "https://files.pythonhosted.org/packages/73/53/9f3deb31c6b75e281dfd78d03c96eac2d8fe0b4eb9c00ff01ddca04f4ba3/faster_translate-1.0.2.tar.gz",
"platform": null,
"description": "# Faster Translate\n\n<div align=\"center\">\n\n[](https://pepy.tech/projects/faster-translate)\n[](https://pepy.tech/projects/faster-translate)\n[](https://github.com/sawradip/faster-translate/blob/main/LICENSE)\n[](https://pypi.org/project/faster-translate/)\n\n</div>\n\nA high-performance translation library powered by state-of-the-art models. Faster Translate offers optimized inference using CTranslate2 and vLLM backends, providing an easy-to-use interface for applications requiring efficient and accurate translations.\n\n## \ud83d\ude80 Features\n\n- **High-performance inference** using CTranslate2 and vLLM backends\n- **Seamless integration** with Hugging Face models\n- **Flexible API** for single sentence, batch, and large-scale translation\n- **Dataset translation** with direct Hugging Face integration\n- **Multi-backend support** for both traditional (CTranslate2) and LLM-based (vLLM) models\n- **Text normalization** for improved translation quality\n\n## \ud83d\udce6 Installation\n\n```bash\npip install faster-translate\n```\n\n### Optional Dependencies\n\nFor specific normalizers or model backends:\n\n```bash\n# For Bengali text normalization\npip install git+https://github.com/csebuetnlp/normalizer\n\n# For vLLM backend support (required for LLM-based models)\npip install vllm\n```\n\n## \ud83d\udd0d Usage\n\n### Basic Translation\n\n```python\nfrom faster_translate import TranslatorModel\n\n# Initialize with a pre-configured model\ntranslator = TranslatorModel.from_pretrained(\"banglanmt_bn2en\")\n\n# Translate a single sentence\nenglish_text = translator.translate_single(\"\u09a6\u09c7\u09b6\u09c7 \u09ac\u09bf\u09a6\u09c7\u09b6\u09bf \u098b\u09a3 \u09a8\u09bf\u09af\u09bc\u09c7 \u098f\u0996\u09a8 \u09ac\u09c7\u09b6 \u0986\u09b2\u09cb\u099a\u09a8\u09be \u09b9\u099a\u09cd\u099b\u09c7\u0964\")\nprint(english_text)\n\n# Translate a batch of sentences\nbengali_sentences = [\n \"\u09a6\u09c7\u09b6\u09c7 \u09ac\u09bf\u09a6\u09c7\u09b6\u09bf 
\u098b\u09a3 \u09a8\u09bf\u09af\u09bc\u09c7 \u098f\u0996\u09a8 \u09ac\u09c7\u09b6 \u0986\u09b2\u09cb\u099a\u09a8\u09be \u09b9\u099a\u09cd\u099b\u09c7\u0964\",\n \"\u09b0\u09be\u09a4 \u09a4\u09bf\u09a8\u099f\u09be\u09b0 \u09a6\u09bf\u0995\u09c7 \u0995\u09be\u0981\u099a\u09be\u09ae\u09be\u09b2 \u09a8\u09bf\u09af\u09bc\u09c7 \u0997\u09c1\u09b2\u09bf\u09b8\u09cd\u09a4\u09be\u09a8 \u09a5\u09c7\u0995\u09c7 \u09aa\u09c1\u09b0\u09be\u09a8 \u09a2\u09be\u0995\u09be\u09b0 \u09b6\u09cd\u09af\u09be\u09ae\u09ac\u09be\u099c\u09be\u09b0\u09c7\u09b0 \u0986\u09a1\u09bc\u09a4\u09c7 \u09af\u09be\u099a\u09cd\u099b\u09bf\u09b2\u09c7\u09a8 \u09b2\u09bf\u099f\u09a8 \u09ac\u09cd\u09af\u09be\u09aa\u09be\u09b0\u09c0\u0964\"\n]\ntranslations = translator.translate_batch(bengali_sentences)\n```\n\n### Using Different Model Backends\n\n```python\n# Using a CTTranslate2-based model\nct2_translator = TranslatorModel.from_pretrained(\"banglanmt_bn2en\")\n\n# Using a vLLM-based model\nvllm_translator = TranslatorModel.from_pretrained(\"bangla_qwen_en2bn\")\n```\n\n### Loading Models from Hugging Face\n\n```python\n# Load a specific model from Hugging Face\ntranslator = TranslatorModel.from_pretrained(\n \"sawradip/faster-translate-banglanmt-bn2en-t5\",\n normalizer_func=\"buetnlpnormalizer\"\n)\n```\n\n### Translating Hugging Face Datasets\n\nTranslate an entire dataset with a single function call:\n\n```python\ntranslator = TranslatorModel.from_pretrained(\"banglanmt_en2bn\")\n\n# Translate the entire dataset\ntranslator.translate_hf_dataset(\n \"sawradip/bn-translation-mega-raw-noisy\", \n batch_size=16\n)\n\n# Translate specific subsets\ntranslator.translate_hf_dataset(\n \"sawradip/bn-translation-mega-raw-noisy\",\n subset_name=[\"google\"], \n batch_size=16\n)\n\n# Translate a portion of the dataset\ntranslator.translate_hf_dataset(\n \"sawradip/bn-translation-mega-raw-noisy\",\n subset_name=\"alt\",\n batch_size=16, \n translation_size=0.5 # Translate 50% of the dataset\n)\n```\n\n### 
Publishing Translated Datasets\n\nPush translated datasets directly to Hugging Face:\n\n```python\ntranslator.translate_hf_dataset(\n \"sawradip/bn-translation-mega-raw-noisy\",\n subset_name=\"alt\",\n batch_size=16, \n push_to_hub=True,\n token=\"your_huggingface_token\",\n save_repo_name=\"your-username/translated-dataset\"\n)\n```\n\n## \ud83c\udf10 Supported Models\n\n| Model ID | Source Language | Target Language | Backend | Description |\n|----------|----------------|----------------|---------|-------------|\n| `banglanmt_bn2en` | Bengali | English | CTranslate2 | BanglaNMT model from BUET |\n| `banglanmt_en2bn` | English | Bengali | CTranslate2 | BanglaNMT model from BUET |\n| `bangla_mbartv1_en2bn` | English | Bengali | CTranslate2 | MBart-based translation model |\n| `bangla_qwen_en2bn` | English | Bengali | vLLM | Qwen-based translation model |\n\n## \ud83d\udee0\ufe0f Advanced Configuration\n\n### Custom Sampling Parameters for vLLM Models\n\n```python\nfrom vllm import SamplingParams\n\n# Create custom sampling parameters\nsampling_params = SamplingParams(\n temperature=0.7,\n top_p=0.9,\n max_tokens=512\n)\n\n# Initialize translator with custom parameters\ntranslator = TranslatorModel.from_pretrained(\n \"bangla_qwen_en2bn\", \n sampling_params=sampling_params\n)\n```\n\n## \ud83d\udcaa Contributors\n\n<a href=\"https://github.com/sawradip/faster-translate/graphs/contributors\">\n <img src=\"https://contributors-img.web.app/image?repo=sawradip/faster-translate\" alt=\"List of Contributors\"/>\n</a>\n\n## \ud83d\udcc4 License\n\nThis project is licensed under the MIT License - see the LICENSE file for details.\n\n## \ud83d\udcda Citation\n\nIf you use Faster Translate in your research, please cite:\n\n```bibtex\n@software{faster_translate,\n author = {Sawradip Saha and Contributors},\n title = {Faster Translate: High-Performance Machine Translation Library},\n url = {https://github.com/sawradip/faster-translate},\n year = {2024},\n}\n```\n",
"bugtrack_url": null,
"license": "MIT License\n \n Copyright (c) 2024 Sawradip Saha\n \n Permission is hereby granted, free of charge, to any person obtaining a copy\n of this software and associated documentation files (the \"Software\"), to deal\n in the Software without restriction, including without limitation the rights\n to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n copies of the Software, and to permit persons to whom the Software is\n furnished to do so, subject to the following conditions:\n \n The above copyright notice and this permission notice shall be included in all\n copies or substantial portions of the Software.\n \n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n SOFTWARE.",
"summary": "A high-performance translation library using CTTranslate2 and vLLM.",
"version": "1.0.2",
"project_urls": {
"Bug Tracker": "https://github.com/sawradip/faster-translate/issues",
"Documentation": "https://github.com/sawradip/faster-translate/blob/main/README.md",
"Homepage": "https://github.com/sawradip/faster-translate"
},
"split_keywords": [
"translation",
" huggingface",
" nlp",
" ctranslate2",
" vllm"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "6f3d5d4ba2ebc57c0e6d740c6afcad3772b17038517bbb5965f32838efa85f3c",
"md5": "9400eae67e59b441853a07d9103d42cf",
"sha256": "13e25dbef4263a47eb5e7c4c6063efce69530158dffc24d31baaad05cb4095df"
},
"downloads": -1,
"filename": "faster_translate-1.0.2-py3-none-any.whl",
"has_sig": false,
"md5_digest": "9400eae67e59b441853a07d9103d42cf",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.8",
"size": 11815,
"upload_time": "2025-03-03T23:14:10",
"upload_time_iso_8601": "2025-03-03T23:14:10.347401Z",
"url": "https://files.pythonhosted.org/packages/6f/3d/5d4ba2ebc57c0e6d740c6afcad3772b17038517bbb5965f32838efa85f3c/faster_translate-1.0.2-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "73539f3deb31c6b75e281dfd78d03c96eac2d8fe0b4eb9c00ff01ddca04f4ba3",
"md5": "9afdbe581c0f3bd80a1534d7cd11f96b",
"sha256": "d8810cbe18a4ab8d851f80c7991000ce5025891880a5bcf6bc87ee4016b58fc3"
},
"downloads": -1,
"filename": "faster_translate-1.0.2.tar.gz",
"has_sig": false,
"md5_digest": "9afdbe581c0f3bd80a1534d7cd11f96b",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.8",
"size": 13410,
"upload_time": "2025-03-03T23:14:12",
"upload_time_iso_8601": "2025-03-03T23:14:12.344412Z",
"url": "https://files.pythonhosted.org/packages/73/53/9f3deb31c6b75e281dfd78d03c96eac2d8fe0b4eb9c00ff01ddca04f4ba3/faster_translate-1.0.2.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-03-03 23:14:12",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "sawradip",
"github_project": "faster-translate",
"travis_ci": false,
"coveralls": false,
"github_actions": false,
"lcname": "faster-translate"
}