ctranslate2

Name: ctranslate2
Version: 4.2.0
Home page: https://opennmt.net
Summary: Fast inference engine for Transformer models
Upload time: 2024-04-10 17:23:45
Maintainer: None
Docs URL: None
Author: OpenNMT
Requires Python: >=3.8
License: MIT
Keywords: opennmt, nmt, neural machine translation, cuda, mkl, inference, quantization
            [![CI](https://github.com/OpenNMT/CTranslate2/workflows/CI/badge.svg)](https://github.com/OpenNMT/CTranslate2/actions?query=workflow%3ACI) [![PyPI version](https://badge.fury.io/py/ctranslate2.svg)](https://badge.fury.io/py/ctranslate2) [![Documentation](https://img.shields.io/badge/docs-latest-blue.svg)](https://opennmt.net/CTranslate2/) [![Gitter](https://badges.gitter.im/OpenNMT/CTranslate2.svg)](https://gitter.im/OpenNMT/CTranslate2?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge) [![Forum](https://img.shields.io/discourse/status?server=https%3A%2F%2Fforum.opennmt.net%2F)](https://forum.opennmt.net/)

# CTranslate2

CTranslate2 is a C++ and Python library for efficient inference with Transformer models.

The project implements a custom runtime that applies many performance optimization techniques such as weight quantization, layer fusion, and batch reordering to [accelerate and reduce the memory usage](#benchmarks) of Transformer models on CPU and GPU.

The following model types are currently supported:

* Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
* Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
* Encoder-only models: BERT, DistilBERT, XLM-RoBERTa

Compatible models must first be converted into an optimized model format. The library includes converters for multiple frameworks (a minimal conversion sketch follows the list):

* [OpenNMT-py](https://opennmt.net/CTranslate2/guides/opennmt_py.html)
* [OpenNMT-tf](https://opennmt.net/CTranslate2/guides/opennmt_tf.html)
* [Fairseq](https://opennmt.net/CTranslate2/guides/fairseq.html)
* [Marian](https://opennmt.net/CTranslate2/guides/marian.html)
* [OPUS-MT](https://opennmt.net/CTranslate2/guides/opus_mt.html)
* [Transformers](https://opennmt.net/CTranslate2/guides/transformers.html)
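
As a minimal sketch of the conversion step, the converter classes can also be invoked directly from Python; the checkpoint name and output directory below are illustrative, and the Transformers converter assumes the `transformers` package is installed:

```python
import ctranslate2.converters

# Convert a Hugging Face checkpoint (illustrative name) to the CTranslate2
# format, optionally quantizing the weights to 8-bit integers on disk.
converter = ctranslate2.converters.TransformersConverter("facebook/m2m100_418M")
converter.convert("m2m100_ct2", quantization="int8")
```

The equivalent command-line converters described in the guides above produce the same model directory.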

The project is production-oriented and comes with [backward compatibility guarantees](https://opennmt.net/CTranslate2/versioning.html), but it also includes experimental features related to model compression and inference acceleration.

## Key features

* **Fast and efficient execution on CPU and GPU**<br/>The execution [is significantly faster and requires fewer resources](#benchmarks) than general-purpose deep learning frameworks on supported models and tasks thanks to many advanced optimizations: layer fusion, padding removal, batch reordering, in-place operations, caching mechanisms, etc.
* **Quantization and reduced precision**<br/>The model serialization and computation support weights with [reduced precision](https://opennmt.net/CTranslate2/quantization.html): 16-bit floating points (FP16), 16-bit brain floating points (BF16), 16-bit integers (INT16), and 8-bit integers (INT8). See the loading sketch after this list.
* **Multiple CPU architectures support**<br/>The project supports x86-64 and AArch64/ARM64 processors and integrates multiple backends that are optimized for these platforms: [Intel MKL](https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/onemkl.html), [oneDNN](https://github.com/oneapi-src/oneDNN), [OpenBLAS](https://www.openblas.net/), [Ruy](https://github.com/google/ruy), and [Apple Accelerate](https://developer.apple.com/documentation/accelerate).
* **Automatic CPU detection and code dispatch**<br/>One binary can include multiple backends (e.g. Intel MKL and oneDNN) and instruction set architectures (e.g. AVX, AVX2) that are automatically selected at runtime based on the CPU information.
* **Parallel and asynchronous execution**<br/>Multiple batches can be processed in parallel and asynchronously using multiple GPUs or CPU cores.
* **Dynamic memory usage**<br/>The memory usage changes dynamically depending on the request size while still meeting performance requirements thanks to caching allocators on both CPU and GPU.
* **Lightweight on disk**<br/>Quantization can make the models 4 times smaller on disk with minimal accuracy loss.
* **Simple integration**<br/>The project has few dependencies and exposes simple APIs in [Python](https://opennmt.net/CTranslate2/python/overview.html) and C++ to cover most integration needs.
* **Configurable and interactive decoding**<br/>[Advanced decoding features](https://opennmt.net/CTranslate2/decoding.html) allow autocompleting a partial sequence and returning alternatives at a specific location in the sequence.
* **Tensor parallelism for distributed inference**<br/>Very large models can be split across multiple GPUs. Follow this [documentation](docs/parallel.md#model-and-tensor-parallelism) to set up the required environment.
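
As a small illustration of how the precision and device options above surface in the Python API (the model path is a placeholder):

```python
import ctranslate2

# Load a converted model on GPU with mixed int8/float16 computation; the
# library falls back to a supported compute type if the device does not
# support the requested one.
translator = ctranslate2.Translator(
    "ende_ct2",                   # placeholder path to a converted model
    device="cuda",                # or "cpu" / "auto"
    compute_type="int8_float16",  # e.g. "int8", "float16", "bfloat16", "default"
)
```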

Some of these features are difficult to achieve with standard deep learning frameworks and are the motivation for this project.

## Installation and usage

CTranslate2 can be installed with pip:

```bash
pip install ctranslate2
```

The Python module is used to convert models and can translate or generate text in a few lines of code:

```python
import ctranslate2

# Translation with an encoder-decoder model (expects pre-tokenized input).
translator = ctranslate2.Translator(translation_model_path)
translator.translate_batch(tokens)

# Text generation with a decoder-only model.
generator = ctranslate2.Generator(generation_model_path)
generator.generate_batch(start_tokens)
```
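
Inputs are expected to be pre-tokenized. As a rough usage sketch (the model path and subword tokens are purely illustrative), translation results expose the hypotheses as token lists:

```python
import ctranslate2

translator = ctranslate2.Translator("ende_ct2")  # placeholder path
results = translator.translate_batch([["▁Hello", "▁world", "!"]])  # illustrative tokens
print(results[0].hypotheses[0])  # best hypothesis, as a list of target tokens
```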

See the [documentation](https://opennmt.net/CTranslate2) for more information and examples.

## Benchmarks

We translate the En->De test set *newstest2014* with multiple models:

* [OpenNMT-tf WMT14](https://opennmt.net/Models-tf/#translation): a base Transformer trained with OpenNMT-tf on the WMT14 dataset (4.5M lines)
* [OpenNMT-py WMT14](https://opennmt.net/Models-py/#translation): a base Transformer trained with OpenNMT-py on the WMT14 dataset (4.5M lines)
* [OPUS-MT](https://github.com/Helsinki-NLP/OPUS-MT-train/tree/master/models/en-de#opus-2020-02-26zip): a base Transformer trained with Marian on all OPUS data available on 2020-02-26 (81.9M lines)

The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the [benchmark scripts](tools/benchmark) for more details and to reproduce these numbers.

**Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings.**

#### CPU

| | Tokens per second | Max. memory | BLEU |
| --- | --- | --- | --- |
| **OpenNMT-tf WMT14 model** | | | |
| OpenNMT-tf 2.31.0 (with TensorFlow 2.11.0) | 209.2 | 2653MB | 26.93 |
| **OpenNMT-py WMT14 model** | | | |
| OpenNMT-py 3.0.4 (with PyTorch 1.13.1) | 275.8 | 2012MB | 26.77 |
| - int8 | 323.3 | 1359MB | 26.72 |
| CTranslate2 3.6.0 | 658.8 | 849MB | 26.77 |
| - int16 | 733.0 | 672MB | 26.82 |
| - int8 | 860.2 | 529MB | 26.78 |
| - int8 + vmap | 1126.2 | 598MB | 26.64 |
| **OPUS-MT model** | | | |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 | 344.5 | 7605MB | 27.93 |
| - int16 | 330.2 | 5901MB | 27.65 |
| - int8 | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 | 525.0 | 721MB | 27.92 |
| - int16 | 596.1 | 660MB | 27.53 |
| - int8 | 696.1 | 516MB | 27.65 |

Executed with 4 threads on a [*c5.2xlarge*](https://aws.amazon.com/ec2/instance-types/c5/) Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.

#### GPU

| | Tokens per second | Max. GPU memory | Max. CPU memory | BLEU |
| --- | --- | --- | --- | --- |
| **OpenNMT-tf WMT14 model** | | | | |
| OpenNMT-tf 2.31.0 (with TensorFlow 2.11.0) | 1483.5 | 3031MB | 3122MB | 26.94 |
| **OpenNMT-py WMT14 model** | | | | |
| OpenNMT-py 3.0.4 (with PyTorch 1.13.1) | 1795.2 | 2973MB | 3099MB | 26.77 |
| FasterTransformer 5.3 | 6979.0 | 2402MB | 1131MB | 26.77 |
| - float16 | 8592.5 | 1360MB | 1135MB | 26.80 |
| CTranslate2 3.6.0 | 6634.7 | 1261MB | 953MB | 26.77 |
| - int8 | 8567.2 | 1005MB | 807MB | 26.85 |
| - float16 | 10990.7 | 941MB | 807MB | 26.77 |
| - int8 + float16 | 8725.4 | 813MB | 800MB | 26.83 |
| **OPUS-MT model** | | | | |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 | 3241.0 | 3381MB | 2156MB | 27.92 |
| - float16 | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 | 5876.4 | 1197MB | 754MB | 27.92 |
| - int8 | 7521.9 | 1005MB | 792MB | 27.79 |
| - float16 | 9296.7 | 909MB | 814MB | 27.90 |
| - int8 + float16 | 8362.7 | 813MB | 766MB | 27.90 |

Executed with CUDA 11 on a [*g5.xlarge*](https://aws.amazon.com/ec2/instance-types/g5/) Amazon EC2 instance equipped with an NVIDIA A10G GPU (driver version: 510.47.03).

## Additional resources

* [Documentation](https://opennmt.net/CTranslate2)
* [Forum](https://forum.opennmt.net)
* [Gitter](https://gitter.im/OpenNMT/CTranslate2)

            
