openlrc

Name: openlrc
Version: 1.6.0
Summary: Transcribe (whisper) and translate (gpt) voice into LRC file.
Home page: https://github.com/zh-plus/Open-Lyrics
Author: Hao Zheng
Requires Python: <4.0,>=3.9
License: MIT
Keywords: openai-gpt3, whisper, voice transcribe, lrc
Upload time: 2024-12-09 11:03:44
# Open-Lyrics

[![PyPI](https://img.shields.io/pypi/v/openlrc)](https://pypi.org/project/openlrc/)
[![PyPI - License](https://img.shields.io/pypi/l/openlrc)](https://pypi.org/project/openlrc/)
[![Downloads](https://static.pepy.tech/badge/openlrc)](https://pepy.tech/project/openlrc)
![GitHub Workflow Status (with event)](https://img.shields.io/github/actions/workflow/status/zh-plus/Open-Lyrics/ci.yml)

Open-Lyrics is a Python library that transcribes voice files using
[faster-whisper](https://github.com/guillaumekln/faster-whisper), and translates/polishes the resulting text
into `.lrc` files in the desired language using an LLM,
e.g. [OpenAI-GPT](https://github.com/openai/openai-python) or [Anthropic-Claude](https://github.com/anthropics/anthropic-sdk-python).

#### Key Features:

- Well-preprocessed audio to reduce hallucination (loudness normalization & optional noise suppression).
- Context-aware translation to improve translation quality.
  Check [prompt](https://github.com/zh-plus/openlrc/blob/master/openlrc/prompter.py) for details.
- Check [here](#how-it-works) for an overview of the architecture.

## New 🚨

- 2024.5.7:
    - Add custom endpoint (base_url) support for OpenAI & Anthropic:
        ```python
        lrcer = LRCer(base_url_config={'openai': 'https://api.chatanywhere.tech',
                                       'anthropic': 'https://example/api'})
        ```
    - Generate bilingual subtitles:
        ```python
        lrcer.run('./data/test.mp3', target_lang='zh-cn', bilingual_sub=True)
        ```
- 2024.5.11: Added glossary support to the prompt, which is confirmed to improve domain-specific translation.
  Check [here](#glossary) for details.
- 2024.5.17: You can route any model through either the OpenAI or Anthropic chatbot SDK by setting `chatbot_model` to
  `provider: model_name` together with `base_url_config`:
    ```python
    lrcer = LRCer(chatbot_model='openai: claude-3-haiku-20240307',
                  base_url_config={'openai': 'https://api.g4f.icu/v1/'})
    ```
- 2024.6.25: Added Gemini as a translation LLM; try `gemini-1.5-flash`:
    ```python
    lrcer = LRCer(chatbot_model='gemini-1.5-flash')
    ```
- 2024.9.10: openlrc now depends on
  a [specific commit](https://github.com/SYSTRAN/faster-whisper/commit/d57c5b40b06e59ec44240d93485a95799548af50) of
  faster-whisper that is not published on PyPI. Install it from source:
    ```shell
    pip install "faster-whisper @ https://github.com/SYSTRAN/faster-whisper/archive/8327d8cc647266ed66f6cd878cf97eccface7351.tar.gz"
    ```

## Installation ⚙️

1. Install CUDA 11.x and [cuDNN 8 for CUDA 11](https://developer.nvidia.com/cudnn) first, following
   https://opennmt.net/CTranslate2/installation.html, to enable `faster-whisper`.

   `faster-whisper` also needs [cuBLAS for CUDA 11](https://developer.nvidia.com/cublas) installed.
   <details>
   <summary>For Windows Users (click to expand)</summary> 

   Windows users can download the required libraries from Purfview's repository:

   Purfview's [whisper-standalone-win](https://github.com/Purfview/whisper-standalone-win) provides the required NVIDIA
   libraries for Windows in a [single archive](https://github.com/Purfview/whisper-standalone-win/releases/tag/libs).
   Decompress the archive and place the libraries in a directory included in the `PATH`.

   </details>


2. Add LLM API keys. Set the environment variable for whichever provider you plan to use (a minimal Python sketch follows these steps):
   - Add your [OpenAI API key](https://platform.openai.com/account/api-keys) to environment variable `OPENAI_API_KEY`.
   - Add your [Anthropic API key](https://console.anthropic.com/settings/keys) to environment variable
     `ANTHROPIC_API_KEY`.
   - Add your [Google API Key](https://aistudio.google.com/app/apikey) to environment variable `GOOGLE_API_KEY`.

3. Install [ffmpeg](https://ffmpeg.org/download.html) and add its `bin` directory
   to your `PATH`.

4. This project can be installed from PyPI:

    ```shell
    pip install openlrc
    ```

   or install directly from GitHub:

    ```shell
    pip install git+https://github.com/zh-plus/openlrc
    ```

5. Install the latest [faster-whisper](https://github.com/guillaumekln/faster-whisper) from source:
   ```shell
   pip install "faster-whisper @ https://github.com/SYSTRAN/faster-whisper/archive/8327d8cc647266ed66f6cd878cf97eccface7351.tar.gz"
   ```

6. Install [PyTorch](https://pytorch.org/get-started/locally/):
   ```shell
   pip install --force-reinstall torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
   ```

7. Fix the `typing-extensions` issue:
   ```shell
   pip install typing-extensions -U
   ```
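
Step 2 above asks for provider API keys in environment variables. If you prefer setting them from Python (for example in a notebook), here is a minimal sketch with placeholder values; set only the providers you actually use, before constructing `LRCer`:

```python
import os

# Placeholder values -- replace with your real keys, or export them in your shell instead.
os.environ['OPENAI_API_KEY'] = 'sk-...'          # for OpenAI models
os.environ['ANTHROPIC_API_KEY'] = 'sk-ant-...'   # for Anthropic / Claude models
os.environ['GOOGLE_API_KEY'] = '...'             # for Gemini models
```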

## Usage 🐍

[//]: # (### GUI)

[//]: # ()

[//]: # (> [!NOTE])

[//]: # (> We are migrating the GUI from streamlit to Gradio. The GUI is still under development.)

[//]: # ()

[//]: # (```shell)

[//]: # (openlrc gui)

[//]: # (```)

[//]: # ()

[//]: # (![]&#40;https://github.com/zh-plus/openlrc/blob/master/resources/streamlit_app.jpg?raw=true&#41;)

### Python code

```python
from openlrc import LRCer

if __name__ == '__main__':
    lrcer = LRCer()

    # Single file
    lrcer.run('./data/test.mp3',
              target_lang='zh-cn')  # Generate translated ./data/test.lrc with default translate prompt.

    # Multiple files
    lrcer.run(['./data/test1.mp3', './data/test2.mp3'], target_lang='zh-cn')
    # Note: transcription runs sequentially, while translation runs concurrently across files.

    # Input paths can include video files
    lrcer.run(['./data/test_audio.mp3', './data/test_video.mp4'], target_lang='zh-cn')
    # Generate translated ./data/test_audio.lrc and ./data/test_video.srt

    # Use glossary to improve translation
    lrcer = LRCer(glossary='./data/aoe4-glossary.yaml')

    # To skip translation process
    lrcer.run('./data/test.mp3', target_lang='en', skip_trans=True)

    # Change asr_options or vad_options, check openlrc.defaults for details
    vad_options = {"threshold": 0.1}
    lrcer = LRCer(vad_options=vad_options)
    lrcer.run('./data/test.mp3', target_lang='zh-cn')

    # Enhance the audio using noise suppression (consumes more time).
    lrcer.run('./data/test.mp3', target_lang='zh-cn', noise_suppress=True)

    # Change the LLM model for translation
    lrcer = LRCer(chatbot_model='claude-3-sonnet-20240229')
    lrcer.run('./data/test.mp3', target_lang='zh-cn')

    # Clear the temp folder after processing is done
    lrcer.run('./data/test.mp3', target_lang='zh-cn', clear_temp=True)

    # Change base_url
    lrcer = LRCer(base_url_config={'openai': 'https://api.g4f.icu/v1',
                                   'anthropic': 'https://example/api'})

    # Route model to arbitrary Chatbot SDK
    lrcer = LRCer(chatbot_model='openai: claude-3-sonnet-20240229',
                  base_url_config={'openai': 'https://api.g4f.icu/v1/'})

    # Bilingual subtitle
    lrcer.run('./data/test.mp3', target_lang='zh-cn', bilingual_sub=True)
```
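
If you need to tune the transcription itself, `asr_options` can be passed alongside `vad_options`, as mentioned in the example above. A minimal sketch, assuming the keys are forwarded to faster-whisper's transcribe parameters; check `openlrc.defaults` for the exact options supported:

```python
from openlrc import LRCer

# Assumed to be forwarded to faster-whisper; verify key names in openlrc.defaults.
asr_options = {'beam_size': 5, 'no_speech_threshold': 0.6}
vad_options = {'threshold': 0.1}

lrcer = LRCer(asr_options=asr_options, vad_options=vad_options)
lrcer.run('./data/test.mp3', target_lang='zh-cn')
```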

Check more details in [Documentation](https://zh-plus.github.io/openlrc/#/).

### Glossary

Add a glossary to improve domain-specific translation. For example, `aoe4-glossary.yaml`:

```json
{
  "aoe4": "帝国时代4",
  "feudal": "封建时代",
  "2TC": "双TC",
  "English": "英格兰文明",
  "scout": "侦察兵"
}
```

```python
lrcer = LRCer(glossary='./data/aoe4-glossary.yaml')
lrcer.run('./data/test.mp3', target_lang='zh-cn')
```

or pass the glossary directly as a dictionary:

```python
lrcer = LRCer(glossary={"aoe4": "帝国时代4", "feudal": "封建时代"})
lrcer.run('./data/test.mp3', target_lang='zh-cn')
```

## Pricing 💰

*pricing data from [OpenAI](https://openai.com/pricing)
and [Anthropic](https://docs.anthropic.com/claude/docs/models-overview#model-comparison)*

| Model Name                   | Pricing for 1M Tokens <br/>(Input/Output) (USD) | Cost for 1 Hour Audio <br/>(USD) |
|------------------------------|-------------------------------------------------|----------------------------------|
| `gpt-3.5-turbo`              | 0.5, 1.5                                        | 0.01                             |
| `gpt-4o-mini`                | 0.5, 1.5                                        | 0.01                             |
| `gpt-4-0125-preview`         | 10, 30                                          | 0.5                              |
| `gpt-4-turbo-preview`        | 10, 30                                          | 0.5                              |
| `gpt-4o`                     | 5, 15                                           | 0.25                             |
| `claude-3-haiku-20240307`    | 0.25, 1.25                                      | 0.015                            |
| `claude-3-sonnet-20240229`   | 3, 15                                           | 0.2                              |
| `claude-3-opus-20240229`     | 15, 75                                          | 1                                |
| `claude-3-5-sonnet-20240620` | 3, 15                                           | 0.2                              |
| `gemini-1.5-flash`           | 0.175, 2.1                                      | 0.01                             |
| `gemini-1.0-pro`             | 0.5, 1.5                                        | 0.01                             |
| `gemini-1.5-pro`             | 1.75, 21                                        | 0.1                              |
| `deepseek-chat`              | 0.18, 2.2                                       | 0.01                             |

**Note: the cost is estimated from the token counts of the input and output text.
The actual cost may vary with the language and speech rate.**
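
As a rough illustration of how this estimate works, the LLM cost can be computed from token counts and the per-1M-token prices in the table. The token counts below are placeholders, not measurements; real counts vary with the language and speech rate:

```python
def estimate_translation_cost(input_tokens: int, output_tokens: int,
                              price_in_per_1m: float, price_out_per_1m: float) -> float:
    """Rough LLM cost in USD from token counts and per-1M-token prices."""
    return (input_tokens * price_in_per_1m + output_tokens * price_out_per_1m) / 1_000_000


# Example with the gpt-4o-mini prices from the table (0.5 / 1.5 USD per 1M tokens)
# and placeholder token counts.
print(estimate_translation_cost(8_000, 8_000, price_in_per_1m=0.5, price_out_per_1m=1.5))
```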

### Recommended translation model

For English audio, we recommend `deepseek-chat`, `gpt-4o-mini`, or `gemini-1.5-flash`.

For non-English audio, we recommend `claude-3-5-sonnet-20240620`.
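
For example, a small sketch of picking the recommended model per source language. The model names come from the table above; the helper itself is just an illustration, not part of the openlrc API:

```python
from openlrc import LRCer

# Illustrative helper: choose a translation model following the recommendation above.
def make_lrcer(english_audio: bool) -> LRCer:
    model = 'gpt-4o-mini' if english_audio else 'claude-3-5-sonnet-20240620'
    return LRCer(chatbot_model=model)

lrcer = make_lrcer(english_audio=False)
lrcer.run('./data/test.mp3', target_lang='zh-cn')
```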

## How it works

![](https://github.com/zh-plus/openlrc/blob/master/resources/how-it-works.png?raw=true)

To maintain context between translation segments, translation proceeds sequentially within each audio file.


[//]: # (## Comparison to https://microsoft.github.io/autogen/docs/notebooks/agentchat_video_transcript_translate_with_whisper/)

## Todo

- [x] [Efficiency] Batched translate/polish for GPT request (enable contextual ability).
- [x] [Efficiency] Concurrent support for GPT request.
- [x] [Translation Quality] Make translate prompt more robust according to https://github.com/openai/openai-cookbook.
- [x] [Feature] Automatically fix json encoder error using GPT.
- [x] [Efficiency] Asynchronously perform transcription and translation for multiple audio inputs.
- [x] [Quality] Improve batched translation/polish prompt according
  to [gpt-subtrans](https://github.com/machinewrapped/gpt-subtrans).
- [x] [Feature] Input video support.
- [X] [Feature] Multiple output format support.
- [x] [Quality] Speech enhancement for input audio.
- [ ] [Feature] Preprocessor: Voice-music separation.
- [ ] [Feature] Align ground-truth transcription with audio.
- [ ] [Quality]
  Use [multilingual language model](https://www.sbert.net/docs/pretrained_models.html#multi-lingual-models) to assess
  translation quality.
- [ ] [Efficiency] Add Azure OpenAI Service support.
- [ ] [Quality] Use [claude](https://www.anthropic.com/index/introducing-claude) for translation.
- [ ] [Feature] Add local LLM support.
- [X] [Feature] Multiple translate engine (Anthropic, Microsoft, DeepL, Google, etc.) support.
- [ ] [**Feature**] Build
  an [Electron + FastAPI](https://ivanyu2021.hashnode.dev/electron-django-desktop-app-integrate-javascript-and-python)
  GUI for a cross-platform application.
- [x] [Feature] Web-based [streamlit](https://streamlit.io/) GUI.
- [ ] Add [fine-tuned whisper-large-v2](https://huggingface.co/models?search=whisper-large-v2) models for common
  languages.
- [x] [Feature] Add custom OpenAI & Anthropic endpoint support.
- [ ] [Feature] Add local translation model support (e.g. [SakuraLLM](https://github.com/SakuraLLM/Sakura-13B-Galgame)).
- [ ] [Quality] Construct translation quality benchmark test for each patch.
- [ ] [Quality] Split subtitles using
  LLM ([ref](https://github.com/Huanshere/VideoLingo/blob/ff520309e958dd3048586837d09ce37d3e9ebabd/core/prompts_storage.py#L6)).
- [ ] [Quality] Trim extra long subtitle using
  LLM ([ref](https://github.com/Huanshere/VideoLingo/blob/ff520309e958dd3048586837d09ce37d3e9ebabd/core/prompts_storage.py#L311)).
- [ ] [Others] Add transcribed examples.
    - [ ] Song
    - [ ] Podcast
    - [ ] Audiobook

## Credits

- https://github.com/guillaumekln/faster-whisper
- https://github.com/m-bain/whisperX
- https://github.com/openai/openai-python
- https://github.com/openai/whisper
- https://github.com/machinewrapped/gpt-subtrans
- https://github.com/MicrosoftTranslator/Text-Translation-API-V3-Python
- https://github.com/streamlit/streamlit

## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=zh-plus/Open-Lyrics&type=Date)](https://star-history.com/#zh-plus/Open-Lyrics&Date)

## Citation

```
@book{openlrc2024zh,
	title = {zh-plus/openlrc},
	url = {https://github.com/zh-plus/openlrc},
	author = {Hao, Zheng},
	date = {2024-09-10},
	year = {2024},
	month = {9},
	day = {10},
}
```

            
