transcribe-align-textgrid

Name: transcribe-align-textgrid
Version: 0.2.2
Summary: Create force-aligned transcription TextGrids from audio
Author email: JJWRoeloffs <jelleroeloffs@gmail.com>
Homepage: https://github.com/JJWRoeloffs/transcribe_align_textgrid
Bug tracker: https://github.com/JJWRoeloffs/transcribe_align_textgrid/issues
Requires Python: >=3.8
Keywords: praat, whisper, force-align, TextGrid
Requirements: praatio (~=6.2), jsonschema (~=4.23), whisper-timestamped (~=1.15)
Upload time: 2024-09-10 14:25:45

# Transcribe align TextGrid

A small wrapper package around [whisper-timestamped](https://github.com/linto-ai/whisper-timestamped). Create force-aligned transcription TextGrids from raw audio.

## Installation

### Requirements

- `Python 3.8` or above.
  - Use the executable `python3.x` on Unix (available in most package managers), or `py -3.x` on Windows.
  - This command-line executable will be referred to as `[python-executable]` for the rest of these instructions.
  - On older Python versions, install pip with `[python-executable] -m ensurepip --default-pip`.
- `ffmpeg`. Usually preinstalled on Linux. For Windows, see the installation instructions on the [whisper repository](https://github.com/openai/whisper).

### Installing Torch

Torch, on which Whisper is built, is quite a low-level library, so the version you need depends on your OS and the type of GPU you have. On Mac and Windows, pip will by default install a non-accelerated, CPU-only version of the library. On Linux, it will presume you have a CUDA-capable (which is to say, Nvidia-branded) GPU. If you are on Windows and have an Nvidia GPU you can use, or are on Linux and either do not have a GPU or have an AMD GPU, check out the more detailed Torch installation instructions [here](https://pytorch.org/get-started/locally/).
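For example, at the time of writing, that page suggests something along these lines for a Linux machine without a usable GPU; treat this as a sketch and generate the exact command for your own platform there:

```bash
[python-executable] -m pip install torch --index-url https://download.pytorch.org/whl/cpu
```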

This should be done _before_ installing `transcribe_align_textgrid` and `whisper_timestamped`.

### Installing

Once the requirements are satisfied, you can install whisper-timestamped and this package:

```bash
[python-executable] -m pip install transcribe_align_textgrid
```

## Running from the command line

Once the application is installed, you can run it with:

```bash
[python-executable] -m transcribe_align_textgrid [path]
```

Here, `[path]` is the path to the audio file or files.

- If a directory path is passed, all audio files in the directory will be transcribed, and force-aligned transcription TextGrids of the same name will be generated in this directory.
- If a file path is passed, a force-aligned transcription TextGrid will be generated into the same directory with the same name as the original file.
- If a glob is passed, the glob will be resolved and all matches will be processed as if the files were passed individually.
- By default, if a non-audio file is passed, an error is raised. To skip non-audio files instead, pass the `--skip` flag (see the examples below).
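For example (the directory and file names are placeholders):

```bash
# Transcribe every audio file in a directory
[python-executable] -m transcribe_align_textgrid ./recordings/

# Transcribe a single file
[python-executable] -m transcribe_align_textgrid ./recordings/interview.wav

# Transcribe everything matching a glob, skipping any non-audio matches
[python-executable] -m transcribe_align_textgrid ./recordings/*.wav --skip
```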

### Selecting a different model

By default, this will run the smallest model, `tiny`, which is the fastest but also the least accurate. To run with another model, pass it as an argument:

```bash
[python-executable] -m transcribe_align_textgrid [path] --model [model]
```

The available models are:

|  name  | Parameters | Required VRAM | Relative speed |
| :----: | :--------: | :-----------: | :------------: |
|  tiny  |    39 M    |     ~1 GB     |      ~32x      |
|  base  |    74 M    |     ~1 GB     |      ~16x      |
| small  |   244 M    |     ~2 GB     |      ~6x       |
| medium |   769 M    |     ~5 GB     |      ~2x       |
| large  |   1550 M   |    ~10 GB     |       1x       |
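For example, to trade some speed for accuracy with the `small` model:

```bash
[python-executable] -m transcribe_align_textgrid [path] --model small
```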

### Specifying what language to use

By default, the application will try to detect the language automatically. However, you can also specify it manually:

```bash
[python-executable] -m transcribe_align_textgrid [path] --language [language]

# Or also specifying what model to use:
[python-executable] -m transcribe_align_textgrid [path] --model [model] --language [language]
```

To see which languages are available, please see the [tokenizer.py](https://github.com/openai/whisper/blob/main/whisper/tokenizer.py) file in the Whisper source. (Yes, the OpenAI team themselves recommend finding it this way, too.)
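For instance, to force English, using the `en` code from that file:

```bash
[python-executable] -m transcribe_align_textgrid [path] --language en
```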

## Using as a library

The tool can also be used as a library. It exports one function, `whisper_to_textgrid()`, which takes a transcription object (a nested dictionary) from [whisper-timestamped](https://github.com/linto-ai/whisper-timestamped) and returns a Textgrid object from [praatio](https://github.com/timmahrt/praatIO). The typical JSON output from whisper-timestamped works, too.
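A minimal sketch of that round trip, assuming `whisper_to_textgrid` is importable from the package top level and takes only the transcription dictionary (the file names are placeholders):

```python
import whisper_timestamped as whisper

from transcribe_align_textgrid import whisper_to_textgrid

# Transcribe an audio file with whisper-timestamped
audio = whisper.load_audio("recording.wav")
model = whisper.load_model("tiny")
result = whisper.transcribe(model, audio)

# Convert the nested-dictionary transcription into a praatio Textgrid
tg = whisper_to_textgrid(result)

# Write a TextGrid file that Praat can open
tg.save("recording.TextGrid", format="long_textgrid", includeBlankSpaces=True)
```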

### Output

The output TextGrids have four TextGridTiers:

- `segments_text` The text in a given segment (a speaker's turn).
- `segments_confidence` The confidence the model has that this is the correct labelling and segmentation for the segment.
- `words_text` The text of a given word.
- `words_confidence` The confidence the model has that this is the correct labelling and segmentation for this word.

If one of these tiers would otherwise be empty according to the whisper-timestamped output, a tier with a single empty interval (0.0, 0.1) is generated instead, to satisfy Praat's error handling.

In Praat, it will look a little like this:

<p align="center">
  <img src=".assets/sample_output.png" />
</p>
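A small sketch of reading such a TextGrid back with praatio (the file name is a placeholder; the tier names are the ones listed above):

```python
from praatio import textgrid

# Open a generated TextGrid, keeping the empty placeholder intervals
tg = textgrid.openTextgrid("recording.TextGrid", includeEmptyIntervals=True)

# The four tiers described above
print(tg.tierNames)

# Print every word interval on the words_text tier
for start, end, label in tg.getTier("words_text").entries:
    print(f"{start:.2f}-{end:.2f}: {label}")
```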

## Development

The package is quite trivial, but if you want to work on it, here are some instructions.

### Style

All code is formatted with the [Black](https://github.com/psf/black) code formatter. As for casing, Python standards are used except where dependencies don't follow them.

I am dyslexic, and quite likely to make spelling errors in variable names. If you find any, don't hesitate to send me a pull request!

### Running Tests

After cloning the repository, moving into it, and installing `pytest` and `pytest-cov` with pip, run tests with:

```bash
# Install the current version of the package locally to be able to test it.
[python-executable] -m pip install -e .

[python-executable] -m pytest --cov=transcribe_align_textgrid tests/
```

To test the CLI, there are audio files in `./tests/audio/` to run on. For example:

```bash
[python-executable] -m transcribe_align_textgrid ./tests/audio/*.mp3
```

Since this relies on Torch's stochastic models, the output is not expected to be fully identical between runs, but it can be visually compared with the expected outputs in the `./tests/expected/` directory.
