[![PyPI version](https://badge.fury.io/py/verbatim.svg?)](https://pypi.python.org/pypi/verbatim/)
[![Python versions](https://img.shields.io/pypi/pyversions/verbatim.svg)](https://pypi.org/project/verbatim/)
[![Bandit](https://github.com/gaspardpetit/verbatim/actions/workflows/bandit.yml/badge.svg)](https://github.com/gaspardpetit/verbatim/actions/workflows/bandit.yml)
[![Pylint](https://github.com/gaspardpetit/verbatim/actions/workflows/pylint.yml/badge.svg)](https://github.com/gaspardpetit/verbatim/actions/workflows/pylint.yml)
[![Python package](https://github.com/gaspardpetit/verbatim/actions/workflows/python-package.yml/badge.svg)](https://github.com/gaspardpetit/verbatim/actions/workflows/python-package.yml)

# Verbatim

High-quality multilingual speech-to-text.

## Installation

Install from PyPI:
```bash
pip install verbatim
```

Install the latest from git:
```bash
pip install git+https://github.com/gaspardpetit/verbatim.git
```

## HuggingFace Token
This project requires access to the pyannote models, which are gated:
1. Create an account on [Hugging Face](https://huggingface.co/)
2. Request access to the model at https://huggingface.co/pyannote/speaker-diarization-3.1
3. Request access to the model at https://huggingface.co/pyannote/segmentation-3.0
4. From your `Settings` > `Access Tokens`, generate an access token
5. When running verbatim for the first time, set the `TOKEN_HUGGINGFACE` environment variable to your Hugging Face token. Once the models are downloaded, this is no longer necessary (see the example below).
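
If you prefer to provide the token from Python rather than the shell, here is a minimal sketch (the `hf_...` value is a placeholder for your own token):

```python
import os

# Provide the token for the gated pyannote models before the first run;
# once the models are cached locally, this is no longer needed.
os.environ["TOKEN_HUGGINGFACE"] = "hf_..."  # placeholder: your Hugging Face token

from verbatim import Context, Pipeline

context = Context(
    languages=["en", "fr"],
    nb_speakers=2,
    source_file="audio.mp3",
    out_dir="out")
Pipeline(context=context).execute()
```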


## Usage (from terminal)

Simple usage
```bash
verbatim audio_file.mp3
```

Verbose
```bash
verbatim audio_file.mp3 -v
```

Very Verbose
```bash
verbatim audio_file.mp3 -vv
```

Force CPU only
```bash
verbatim audio_file.mp3 --cpu
```

Save output files to a specific directory
```bash
verbatim audio_file.mp3 -o ./output/
```


## Usage (from Docker)
The tool can also be used from a Docker container. This is particularly convenient when the audio and its transcription are confidential: running Docker with `--network none` ensures the tool operates completely offline.

With GPU support
```bash
docker run --network none --shm-size 8G --gpus all \
    -v "/local/path/to/out/:/data/out/" \
    -v "/local/path/to/audio.mp3:/data/audio.mp3" ghcr.io/gaspardpetit/verbatim:latest \
    verbatim /data/audio.mp3 -o /data/out --language en fr
```

Without GPU support
```bash
docker run --network none \
    -v "/local/path/to/out/:/data/out/" \
    -v "/local/path/to/audio.mp3:/data/audio.mp3" ghcr.io/gaspardpetit/verbatim:latest \
    verbatim /data/audio.mp3 -o /data/out --language en fr
```


## Usage (from Python)

```python 
from verbatim import Context, Pipeline
context: Context = Context(
    languages=["en", "fr"],
    nb_speakers=2,
    source_file="audio.mp3",
    out_dir="out")
pipeline: Pipeline = Pipeline(context=context)
pipeline.execute()
```

The project is modular: individual components can be used outside of the full pipeline, and the pipeline can be customized with custom stages. For example, to use a custom diarization stage:


```python
from verbatim.speaker_diarization import DiarizeSpeakers
from verbatim import Context, Pipeline
my_custom_diarization: DiarizeSpeakers = get_custom_diarization_stage()  # user-provided factory (see sketch below)

context: Context = Context(
    languages=["en", "fr"],
    nb_speakers=2,
    source_file="audio.mp3",
    out_dir="out")
pipeline: Pipeline = Pipeline(
    context=context,
    diarize_speakers=my_custom_diarization)
pipeline.execute()
```
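
As a sketch of what `get_custom_diarization_stage()` might return, one could subclass `DiarizeSpeakers` and mirror the `execute()` signature of the built-in stages shown below; the exact base-class interface is an assumption based on those examples, not the authoritative API:

```python
from verbatim.speaker_diarization import DiarizeSpeakers

class DiarizeSpeakersFixed(DiarizeSpeakers):
    """Illustrative stage assigning the entire recording to a single speaker.

    The execute() signature mirrors the built-in stages documented below
    (DiarizeSpeakersPyannote, DiarizeSpeakersSpeechBrain); treat it as an
    assumption.
    """

    def execute(self, voice_file_path: str, diarization_file: str, max_speakers: int = 1):
        # Emit a trivial RTTM entry spanning two hours; a real stage would
        # derive speech turns from the audio itself.
        with open(diarization_file, "w", encoding="utf-8") as rttm:
            rttm.write("SPEAKER audio 1 0.00 7200.00 <NA> <NA> SPEAKER_00 <NA> <NA>\n")

def get_custom_diarization_stage() -> DiarizeSpeakers:
    return DiarizeSpeakersFixed()
```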

This project aims to find the best implementation for each stage and glue them together. Contributions with new implementations are welcome.

Each component may also be used independently, for example:

#### Separating Voice from Noise

Using MDX:
```python
from verbatim.voice_isolation import IsolateVoicesMDX
IsolateVoicesMDX().execute(
    audio_file_path="original.mp3",
    voice_file_path="voice.wav")
```

Using Demucs:
```python
from verbatim.voice_isolation import IsolateVoicesDemucs
IsolateVoicesDemucs().execute(
    audio_file_path="original.mp3",
    voice_file_path="voice.wav")
```

#### Diarization
Using Pyannote:
```python
from verbatim.speaker_diarization import DiarizeSpeakersPyannote
DiarizeSpeakersPyannote().execute(
    voice_file_path="voice.wav", 
    diarization_file="dia.rttm",
    max_speakers=4)
```

Using SpeechBrain:
```python
from verbatim.speaker_diarization import DiarizeSpeakersSpeechBrain
DiarizeSpeakersSpeechBrain().execute(
    voice_file_path="voice.wav", 
    diarization_file="dia.rttm",
    max_speakers=4)
```
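
Either variant writes an RTTM file, which can then be inspected with pyannote's RTTM loader; a small sketch (assuming `load_rttm` returns a mapping from file URI to a pyannote `Annotation`):

```python
from pyannote.database.util import load_rttm

# Load the diarization produced above and list each speech turn.
for uri, annotation in load_rttm("dia.rttm").items():
    for segment, _, speaker in annotation.itertracks(yield_label=True):
        print(f"{uri}: {speaker} from {segment.start:.2f}s to {segment.end:.2f}s")
```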

#### Speech to Text

Using FasterWhisper:
```python
from verbatim.wav_conversion import ConvertToWav
from verbatim.speech_transcription import TranscribeSpeechFasterWhisper
transcript = TranscribeSpeechFasterWhisper().execute_segment(
    speech_segment_float32_16khz=ConvertToWav.load_float32_16khz_mono_audio("audio.mp3"),
    language="fr")
```

Using OpenAI Whisper:
```python
from verbatim.wav_conversion import ConvertToWav
from verbatim.speech_transcription import TranscribeSpeechWhisper
transcript = TranscribeSpeechWhisper().execute_segment(
    speech_segment_float32_16khz=ConvertToWav.load_float32_16khz_mono_audio("audio.mp3"),
    language="fr")
```

#### Transcription to Document

Saving to .docx:
```python
from verbatim.transcript_writing import WriteTranscriptDocx
WriteTranscriptDocx().execute(
    transcript=transcript,
    output_file="out.docx")
```

Saving to .ass:
```python
from verbatim.transcript_writing import WriteTranscriptAss
WriteTranscriptAss().execute(
    transcript=transcript,
    output_file="out.ass")
```
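
For instance, the components above can be chained by hand into a minimal end-to-end sketch (isolation, transcription, document output), assuming `execute_segment` returns the transcript object the writers expect, as the examples above suggest; the full `Pipeline` additionally handles diarization, language detection, and the other stages omitted here:

```python
from verbatim.voice_isolation import IsolateVoicesMDX
from verbatim.wav_conversion import ConvertToWav
from verbatim.speech_transcription import TranscribeSpeechFasterWhisper
from verbatim.transcript_writing import WriteTranscriptDocx

# 1. Isolate the voice track from the raw recording.
IsolateVoicesMDX().execute(
    audio_file_path="original.mp3",
    voice_file_path="voice.wav")

# 2. Transcribe the isolated voice as French speech.
transcript = TranscribeSpeechFasterWhisper().execute_segment(
    speech_segment_float32_16khz=ConvertToWav.load_float32_16khz_mono_audio("voice.wav"),
    language="fr")

# 3. Render the transcript as a reviewable Word document.
WriteTranscriptDocx().execute(
    transcript=transcript,
    output_file="out.docx")
```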

## Objectives

### High Quality
Many design decisions favour higher confidence over performance, including multiple passes in several stages to improve the analysis.

### Language support

Languages supported by [openai/whisper](https://github.com/openai/whisper) using the [whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) model should also work, including: Afrikaans, Arabic, Armenian, Azerbaijani, Belarusian, Bosnian, Bulgarian, Catalan, Chinese, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, Galician, German, Greek, Hebrew, Hindi, Hungarian, Icelandic, Indonesian, Italian, Japanese, Kannada, Kazakh, Korean, Latvian, Lithuanian, Macedonian, Malay, Marathi, Maori, Nepali, Norwegian, Persian, Polish, Portuguese, Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swahili, Swedish, Tagalog, Tamil, Thai, Turkish, Ukrainian, Urdu, Vietnamese, and Welsh

### Mixed language support
Speeches may comprise multiple languages. This includes different languages spoken one after the other (e.g., two speakers alternating between two languages) or multiple languages being mixed, such as English expressions used within French speech.

### Speaker Identification
The speech recognition distinguishes between speakers using diarization based on [pyannote](https://github.com/pyannote).

### Word-Level Confidence
The output provides word-level confidence, with poorly recognized words clearly identified to guide manual editing.

### Time Tracking
The output text is associated with timestamps to facilitate source audio navigation when manually editing.

### Voice Isolation
Verbatim works on unclean audio sources, for example recordings that include music, keyboard keystrokes, or background noise. Voices are isolated from other sounds using [adefossez/demucs](https://github.com/adefossez/demucs).

For audit purposes, the audio that was removed because it was considered *background* noise is saved so it can be manually reviewed if necessary.

### Optional GPU Acceleration (on a 12GB VRAM Budget)
The current objective is to limit VRAM requirements to 12GB, allowing cards such as the NVIDIA RTX 4070 to accelerate processing.

Verbatim will run on CPU, but processing should be expected to be slow.

### Long Audio Support (2h+)
The main use case for Verbatim is the transcription of meetings. Consequently, it is designed to work with files containing at least 2 hours of audio.

### Audio Conversion
A variety of audio formats is supported as input, including raw or compressed audio, and even video files containing audio tracks. Any format supported by [ffmpeg](https://ffmpeg.org/) is accepted.
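
Since decoding goes through ffmpeg, the loader shown earlier accepts video containers as well; a small sketch (`meeting.mp4` is a placeholder input, and the mono 16 kHz float32 array return type is assumed from the loader's name):

```python
from verbatim.wav_conversion import ConvertToWav

# Decode any ffmpeg-readable input, including video containers, into the
# mono 16 kHz float32 buffer expected by the transcription stages.
audio = ConvertToWav.load_float32_16khz_mono_audio("meeting.mp4")  # placeholder file
print(f"Loaded {len(audio) / 16000:.1f} seconds of audio")
```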

### Offline processing
Processing is 100% offline to ensure confidentiality. The Docker image may be executed with `--network none` to guarantee that no data leaves the container.

### Output designed for auditing
The output includes:
- a subtitle track rendered over the original audio to review the results;
- a Word document identifying low-confidence words, speakers, and timestamps, making it easy to jump to relevant sections and ensure no part has been omitted.

## Processing Pipeline

![Processing pipeline architecture](doc/img/Architecture.svg)

### 1. Ingestion 🔊
Audio files are converted to raw audio using [ffmpeg](https://ffmpeg.org/).

### 2. Voice Isolation 🗩

The voices are isolated using [karaokenerds/python-audio-separator](https://github.com/karaokenerds/python-audio-separator).

### 3. Diarization 🖹

Speakers are identified using [pyannote](https://github.com/pyannote). A diarization timeline is created, with each speaker assigned speech periods. When known, the number of speakers can be set in advance for better results.

### 4. Language detection

The language used in each section of the diarization is identified using [SYSTRAN/faster-whisper](https://github.com/SYSTRAN/faster-whisper). For sections where detection fails, the process is repeated with progressively wider windows until the language can be determined with an acceptable level of certainty.
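
In pseudocode, the widening-window strategy looks roughly like this (an illustrative sketch, not verbatim's actual implementation; `detect_language` is a hypothetical helper standing in for a faster-whisper language-detection call on a window of samples):

```python
def detect_section_language(audio, start, end, min_confidence=0.8, max_retries=4):
    """Illustrative sketch: retry language detection on progressively wider windows.

    detect_language(samples) is a hypothetical helper returning a
    (language, probability) pair for the given audio samples.
    """
    language, probability = None, 0.0
    for attempt in range(max_retries + 1):
        pad = attempt * (end - start) // 2  # widen the window on each retry
        window = audio[max(0, start - pad):end + pad]
        language, probability = detect_language(window)
        if probability >= min_confidence:
            break  # confident enough; stop widening
    return language
```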

### 5. Speech to Text ✎

We use [SYSTRAN/faster-whisper](https://github.com/SYSTRAN/faster-whisper) for transcription, using the [whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) model, which supports mixtures of languages. It is still necessary to segment the audio; otherwise, Whisper eventually switches to translating instead of transcribing when the requested language does not match the speech.

Whisper provides state-of-the-art transcription, but it is prone to hallucinations: a short audio segment may yield, with a high level of confidence, speech that was never uttered, making hallucinations difficult to detect. To reduce the likelihood of these occurrences, the audio track is split into multiple audio tracks, one for each `speaker`×`language` pair. Voice activity detection (VAD) is then performed using [speechbrain](https://github.com/speechbrain/speechbrain) to identify large audio segments that can be processed together without compromising word timestamp quality.

We use a different VAD for speaker diarization than for speech-to-text processing: [pyannote](https://github.com/pyannote)'s VAD is more granular and better suited to identifying short segments that may involve a change of language or speaker, while [speechbrain](https://github.com/speechbrain/speechbrain)'s VAD is more conservative, preferring larger segments, which makes it better suited for grouping audio for speech-to-text while still skipping long stretches of silence.
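
For reference, speechbrain's pretrained VAD can be run on its own to obtain such coarse speech segments; a sketch based on speechbrain's published `VAD` interface (the import path, model source, and thresholds are assumptions and may vary by version, not verbatim's exact configuration):

```python
from speechbrain.pretrained import VAD

# Coarse voice-activity detection over a 16 kHz mono file, in the spirit of
# the grouping step described above.
vad = VAD.from_hparams(
    source="speechbrain/vad-crdnn-libriparty",
    savedir="pretrained_models/vad-crdnn-libriparty")
boundaries = vad.get_speech_segments("voice.wav")
for start, end in boundaries:
    print(f"speech from {float(start):.2f}s to {float(end):.2f}s")
```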

### 6. Output

The output is a Microsoft Word document that reflects many of the pipeline's decisions. In particular, words with low confidence are highlighted for review. SubStation Alpha subtitles are also provided, based on the implementation of [jianfch/stable-ts](https://github.com/jianfch/stable-ts).

## Sample

Consider the following audio file, obtained from [universal-soundbank](https://universal-soundbank.com/sounds/12374.mp3), containing a mixture of French and English:



https://github.com/gaspardpetit/verbatim/assets/9883156/23bc86d2-567e-4be3-8d79-ba625be8c614



First, we extract the background audio and remove it from the analysis:

**Background noise:**

https://github.com/gaspardpetit/verbatim/assets/9883156/42fad911-3c15-45c2-a40a-7f923fdd4533

Then we perform diarization and language detection. We correctly detect one speaker speaking in French and another one speaking in English:

**Speaker 0 | English:**

https://github.com/gaspardpetit/verbatim/assets/9883156/cecec5aa-cb09-473e-bf9b-c5fd82352dab

**Speaker 1 | French:**

https://github.com/gaspardpetit/verbatim/assets/9883156/8074c064-f4d2-4ec4-8fc0-c985f7c276e8

The output consists of a Word document highlighting words with low certainty (low-certainty words are underlined and highlighted in yellow, while medium-certainty words are simply underlined):

![Microsoft Word Output](doc/img/word_output.png)

A subtitle file is also provided and can be attached to the original audio:

https://github.com/gaspardpetit/verbatim/assets/9883156/9bcc2553-f183-4def-a9c4-bb0c337d4c82

Using Whisper directly on an audio clip like this one results in many errors. Several utterances end up translated instead of transcribed, and others are simply unrecognized and missing:

<table>
  <tr>
    <th></th>
    <th><b>Naive Whisper Transcription</b></th>
    <th><b>Verbatim Transcription</b></th>
  </tr>

  <tr>
    <td>✅</td>
    <td>Madame, Monsieur, bonjour et bienvenue Ă  bord.</td>
    <td>Madame, Monsieur, bonjour et bienvenue Ă  bord.</td>
  </tr>

  <tr>
    <td>❌</td>
    <td>Bienvenue Ă  bord, Mesdames et Messieurs.</td>
    <td>Welcome aboard, ladies and gentlemen.</td>
  </tr>

  <tr>
    <td>❌</td>
    <td>Pour votre sécurité et votre confort, prenez un moment pour regarder la
        vidéo de sécurité suivante.</td>
    <td>For your safety and comfort, please take a moment to watch the following safety video.</td>
  </tr>

  <tr>
    <td>✅</td>
    <td>Ce film concerne votre sécurité à bord. Merci de nous accorder votre attention.</td>
    <td>Ce film concerne votre sécurité à bord. Merci de nous accorder votre attention.</td>
  </tr>

  <tr>
    <td>✅</td>
    <td>Chaque fois que ce signal est allumé, vous devez attacher votre ceinture pour votre sécurité.</td>
    <td>Chaque fois que ce signal est allumé, vous devez attacher votre ceinture pour votre sécurité.</td>
  </tr>

  <tr>
    <td>✅</td>
    <td>Nous vous recommandons de la maintenir attachĂ©e de façon visible lorsque vous ĂȘtes Ă  votre siĂšge.</td>
    <td>Nous vous recommandons de la maintenir attachĂ©e, de façon visible, lorsque vous ĂȘtes Ă  votre siĂšge.</td>
  </tr>

  <tr>
    <td>❌</td>
    <td>Lorsque le signe de la selle est en place, votre selle doit ĂȘtre assise
        en sécurité. Pour votre sécurité, nous
        recommandons que vous gardiez votre selle assise et visible Ă  tous les temps en selle.</td>
    <td>Whenever the seatbelt sign is on, your seatbelt must be securely fastened. For your safety, we recommend that
      you keep your seatbelt fastened and visible at all times while seated.</td>
  </tr>

  <tr>
    <td>❌</td>
    <td>Pour détacher votre selleure, soulevez la partie supérieure de la
        boucle.</td>
    <td>To release the seatbelt, just lift the buckle.</td>
  </tr>

  <tr>
    <td>❌</td>
    <td></td>
    <td>Pour détacher votre ceinture, soulevez la partie supérieure de la boucle.</td>
  </tr>

  <tr>
    <td>✅</td>
    <td>Il est strictement interdit de fumer dans l'avion, y compris dans les toilettes.</td>
    <td>Il est strictement interdit de fumer dans l'avion, y compris dans les toilettes.</td>
  </tr>

  <tr>
    <td>❌</td>
    <td></td>
    <td>This is a no-smoking flight, and it is strictly prohibited to smoke in the toilets.</td>
  </tr>

  <tr>
    <td>✅</td>
    <td>En cas de dépressurisation, un masque à oxygÚne tombera automatiquement à votre portée.</td>
    <td>En cas de dépressurisation, un masque à oxygÚne tombera automatiquement à votre portée.</td>
  </tr>

  <tr>
    <td>❌</td>
    <td></td>
    <td>If there is a sudden decrease in cabin pressure, your oxygen mask will drop automatically in front of you.</td>
  </tr>

  <tr>
    <td>✅</td>
    <td>Tirez sur le masque pour libérer l'oxygÚne, placez-le sur votre visage.</td>
    <td>Tirer sur le masque pour libérer l'oxygÚne, placez-le sur votre visage.</td>
  </tr>

  <tr>
    <td>❌</td>
    <td></td>
    <td>Pull the mask toward you to start the flow of oxygen. Place the mask over your nose and mouth. Make sure your
      own mask is well adjusted before helping others.</td>
  </tr>

  <tr>
    <td>✅</td>
    <td>Une fois votre masque ajusté, il vous sera possible d'aider d'autres personnes. En cas d'évacuation, des
      panneaux lumineux EXIT vous permettent de localiser les issues de secours. Repérez maintenant le panneau EXIT le
      plus proche de votre siĂšge. Il peut se trouver derriĂšre vous.</td>
    <td>Une fois votre masque ajusté, il vous sera possible d'aider d'autres personnes. En cas d'évacuation, des
      panneaux lumineux EXIT vous permettent de localiser les issues de secours. Repérez maintenant le panneau EXIT le
      plus proche de votre siĂšge. Il peut se trouver derriĂšre vous.</td>
  </tr>

  <tr>
    <td>❌</td>
    <td>En cas d'urgence, les signes d'exit illuminés vous aideront à locater
        les portes d'exit.</td>
    <td>In case of an emergency, the illuminated exit signs will help you locate the exit doors.</td>
  </tr>

  <tr>
    <td>❌</td>
    <td>S'il vous plaĂźt, prenez un moment pour locater l'exit le plus proche de
        vous. L'exit le plus proche peut ĂȘtre
        derriĂšre vous.</td>
    <td>Please take a moment now to locate the exit nearest you. The nearest exit may be behind you.</td>
  </tr>

  <tr>
    <td>❌</td>
    <td>Les issues de secours sont situées de chaque cÎté de la cabine, à l'avant, au centre, à l'arriÚre. <span
        style="background-color: yellow;">Ă  l'avant, au
        centre, Ă  l'arriĂšre.</span></td>
    <td>Les issues de secours sont situées de chaque cÎté de la cabine, à l'avant, au centre, à l'arriÚre.</td>
  </tr>

  <tr>
    <td>❌</td>
    <td></td>
    <td>Emergency exits on each side of the cabin are located at the front, in the center, and at the rear.</td>
  </tr>

  <tr>
    <td>✅</td>
    <td>Pour Ă©vacuer l'avion, suivez le marquage lumineux.</td>
    <td>Pour Ă©vacuer l'avion, suivez le marquage lumineux.</td>
  </tr>

  <tr>
    <td>❌</td>
    <td></td>
    <td>In the event of an evacuation, pathway lighting on the floor will guide you to the exits.</td>
  </tr>

  <tr>
    <td>✅</td>
    <td>Les portes seront ouvertes par l'Ă©quipage.</td>
    <td>Les portes seront ouvertes par l'Ă©quipage.</td>
  </tr>

  <tr>
    <td>❌</td>
    <td></td>
    <td>Doors will be opened by the cabin crew.</td>
  </tr>

  <tr>
    <td>✅</td>
    <td>Les toboggans se déploient automatiquement.</td>
    <td>Les toboggans se déploient automatiquement.</td>
  </tr>

  <tr>
    <td>❌</td>
    <td></td>
    <td>The emergency slides will automatically inflate.</td>
  </tr>

  <tr>
    <td>✅</td>
    <td>Le gilet de sauvetage est situé sous votre siÚge ou dans la coudoir centrale.</td>
    <td>Le gilet de sauvetage est situé sous votre siÚge ou dans la coudoir centrale.</td>
  </tr>

  <tr>
    <td>❌</td>
    <td></td>
    <td>Your life jacket is under your seat or in the central armrest.</td>
  </tr>

  <tr>
    <td>✅</td>
    <td>Passez la tĂȘte dans l'encolure, attachez et serrez les sangles.</td>
    <td>Passez la tĂȘte dans l'encolure, attachez et serrez les sangles.</td>
  </tr>

  <tr>
    <td>❌</td>
    <td></td>
    <td>Place it over your head and pull the straps tightly around your waist. Inflate your life jacket by pulling the
      red toggles.</td>
  </tr>

  <tr>
    <td>✅</td>
    <td>Une fois à l'extérieur de l'avion, gonflez votre gilet en tirant sur les poignées rouges.</td>
    <td>Une fois à l'extérieur de l'avion, gonflez votre gilet en tirant sur les poignées rouges.</td>
  </tr>

  <tr>
    <td>❌</td>
    <td>Faites-le seulement quand vous ĂȘtes Ă  l'extĂ©rieur de l'avion.
    </td>
    <td>Do this only when you are outside the aircraft.</td>
  </tr>

  <tr>
    <td>✅</td>
    <td>Nous allons bientĂŽt dĂ©coller. La tablette doit ĂȘtre rangĂ©e et votre dossier redressĂ©.</td>
    <td>Nous allons bientĂŽt dĂ©coller. La tablette doit ĂȘtre rangĂ©e et votre dossier redressĂ©.</td>
  </tr>

  <tr>
    <td>❌</td>
    <td></td>
    <td>In preparation for takeoff, please make sure your tray table is stowed and secure and that your seat back is in
      the upright position.</td>
  </tr>

  <tr>
    <td>✅</td>
    <td>L'usage des appareils électroniques est interite pendant le décollage et l'atterrissage.</td>
    <td>L'usage des appareils électroniques est interdit pendant le décollage et l'atterrissage.</td>
  </tr>

  <tr>
    <td>❌</td>
    <td></td>
    <td>The use of electronic devices is prohibited during takeoff and landing.</td>
  </tr>

  <tr>
    <td>✅</td>
    <td>Les téléphones portables doivent rester éteints pendant tout le vol.</td>
    <td>Les téléphones portables doivent rester éteints pendant tout le vol.</td>
  </tr>

  <tr>
    <td>❌</td>
    <td></td>
    <td>Mobile phones must remain switched off for the duration of the flight.</td>
  </tr>

  <tr>
    <td>✅</td>
    <td>Une notice de sécurité placée devant vous est à votre disposition.</td>
    <td>Une notice de sécurité placée devant vous est à votre disposition.</td>
  </tr>

  <tr>
    <td>❌</td>
    <td>Merci encourage everyone to read the safety information leaflet located
        in the seat back pocket.</td>
    <td>We encourage everyone to read the safety information leaflet located in the seat back pocket.</td>
  </tr>

  <tr>
    <td>✅</td>
    <td>Merci pour votre attention. Nous vous souhaitons un bon vol.</td>
    <td>Merci pour votre attention. Nous vous souhaitons un bon vol.</td>
  </tr>

  <tr>
    <td>✅</td>
    <td>Thank you for your attention. We wish you a very pleasant flight.</td>
    <td>Thank you for your attention. We wish you a very pleasant flight.</td>
  </tr>
</table>

            
