emphases

- Version: 0.0.2
- Summary: Crowdsourced and Automatic Speech Prominence Estimation
- Home page: https://github.com/interactiveaudiolab/emphases
- Author: Interactive Audio Lab
- License: MIT
- Keywords: annotation, audio, emphasis, prominence, speech
- Upload time: 2024-04-12 23:47:12

<h1 align="center">Crowdsourced and Automatic Speech Prominence Estimation</h1>
<div align="center">

[![PyPI](https://img.shields.io/pypi/v/emphases.svg)](https://pypi.python.org/pypi/emphases)
[![License](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT)
[![Downloads](https://static.pepy.tech/badge/emphases)](https://pepy.tech/project/emphases)

Annotation, training, evaluation and inference of speech prominence

[Paper](https://www.maxrmorrison.com/pdfs/morrison2024crowdsourced.pdf) [Website](https://www.maxrmorrison.com/sites/prominence-estimation) [Dataset](https://zenodo.org/records/10402793)

</div>


## Table of contents

- [Installation](#installation)
- [Inference](#inference)
    * [Application programming interface](#application-programming-interface)
        * [`emphases.from_alignment_and_audio`](#emphasesfrom_alignment_and_audio)
        * [`emphases.from_text_and_audio`](#emphasesfrom_text_and_audio)
        * [`emphases.from_file`](#emphasesfrom_file)
        * [`emphases.from_file_to_file`](#emphasesfrom_file_to_file)
        * [`emphases.from_files_to_files`](#emphasesfrom_files_to_files)
    * [Command-line interface](#command-line-interface)
- [Training](#training)
    * [Download data](#download-data)
    * [Annotate data](#annotate-data)
    * [Partition data](#partition-data)
    * [Preprocess](#preprocess)
    * [Train](#train)
- [Evaluation](#evaluation)
    * [Evaluate](#evaluate)
    * [Monitor](#monitor)
- [Citation](#citation)


## Installation

`pip install emphases`

By default, we use the Penn Phonetic Forced Aligner (P2FA) via the [`pyfoal`](https://github.com/maxrmorrison/pyfoal/)
repo to perform word alignments. This requires installing HTK. See [the HTK
installation instructions](https://github.com/maxrmorrison/pyfoal/tree/main?tab=readme-ov-file#penn-phonetic-forced-aligner-p2fa)
provided by `pyfoal`. Alternatively, you can use a different forced aligner
and either pass the alignment as a [`pypar.Alignment`](https://github.com/maxrmorrison/pypar/tree/main)
object or save the alignment as a `.TextGrid` file.
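
A minimal sketch of the second option, assuming `pypar.Alignment` can be constructed from a `.TextGrid` path and saved back to one (as described in the `pypar` repository); the file names are placeholders.

```python
import pypar

# Load an alignment produced by another forced aligner and saved as a TextGrid
alignment = pypar.Alignment('example.TextGrid')

# ...or save an alignment object so it can later be passed to emphases
alignment.save('example.TextGrid')
```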


## Inference

Perform automatic emphasis annotation using our best pretrained model

```python
import emphases

# Text and audio of speech
text_file = 'example.txt'
audio_file = 'example.wav'

# Detect emphases
alignment, prominence = emphases.from_file(text_file, audio_file)

# Check which words were emphasized
for word, score in zip(alignment, prominence[0]):
    print(f'{word} has a prominence of {score}')
```

The `alignment` is a [`pypar.Alignment`](https://github.com/maxrmorrison/pypar)
object.
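
Because each word in the alignment carries timing information, emphasized words can also be located in the audio. A hedged sketch, assuming `pypar`'s `Word.start()` and `Word.end()` accessors and an illustrative threshold of 0.5 (not prescribed by `emphases`):

```python
# Continue from the example above: localize emphasized words in time
for word, score in zip(alignment, prominence[0]):
    if score > 0.5:  # illustrative threshold, not part of the emphases API
        print(f'"{word}" ({word.start():.2f}-{word.end():.2f} s) is emphasized')
```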


### Application programming interface

#### `emphases.from_alignment_and_audio`

```python
def from_alignment_and_audio(
    alignment: pypar.Alignment,
    audio: torch.Tensor,
    sample_rate: int,
    checkpoint: Optional[Union[str, bytes, os.PathLike]] = None,
    batch_size: Optional[int] = None,
    gpu: Optional[int] = None
) -> torch.Tensor:
    """Produce emphasis scores for each word

    Args:
        alignment: The forced phoneme alignment
        audio: The speech waveform
        sample_rate: The audio sampling rate
        checkpoint: The model checkpoint to use for inference
        batch_size: The maximum number of frames per batch
        gpu: The index of the gpu to run inference on

    Returns:
        scores: The float-valued emphasis scores for each word
    """
```
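
A usage sketch for this entry point, assuming `torchaudio` for audio loading and a precomputed `.TextGrid` alignment loaded with `pypar`; per the docstring above, the result is the per-word emphasis scores.

```python
import emphases
import pypar
import torchaudio  # assumption: any loader that returns a waveform tensor works

# Alignment from a prior forced-alignment run
alignment = pypar.Alignment('example.TextGrid')

# Speech waveform and sample rate
audio, sample_rate = torchaudio.load('example.wav')

# Per-word emphasis scores from the default pretrained checkpoint
scores = emphases.from_alignment_and_audio(alignment, audio, sample_rate)
```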


#### `emphases.from_text_and_audio`

```python
def from_text_and_audio(
    text: str,
    audio: torch.Tensor,
    sample_rate: int,
    checkpoint: Optional[Union[str, bytes, os.PathLike]] = None,
    batch_size: Optional[int] = None,
    gpu: Optional[int] = None
) -> Tuple[Type[pypar.Alignment], torch.Tensor]:
    """Produce emphasis scores for each word

    Args:
        text: The speech transcript
        audio: The speech waveform
        sample_rate: The audio sampling rate
        checkpoint: The model checkpoint to use for inference
        batch_size: The maximum number of frames per batch
        gpu: The index of the gpu to run inference on

    Returns:
        alignment: The forced phoneme alignment
        scores: The float-valued emphasis scores for each word
    """
```
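
A usage sketch, assuming `torchaudio` for audio loading; the transcript is read as plain text and forced alignment is performed internally.

```python
import emphases
import torchaudio  # assumption: any loader that returns a waveform tensor works

# Speech transcript
with open('example.txt') as file:
    text = file.read()

# Speech waveform and sample rate
audio, sample_rate = torchaudio.load('example.wav')

# Forced alignment and per-word emphasis scores
alignment, scores = emphases.from_text_and_audio(text, audio, sample_rate)
```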


#### `emphases.from_file`

```python
def from_file(
    text_file: Union[str, bytes, os.PathLike],
    audio_file: Union[str, bytes, os.PathLike],
    checkpoint: Optional[Union[str, bytes, os.PathLike]] = None,
    batch_size: Optional[int] = None,
    gpu: Optional[int] = None
) -> Tuple[Type[pypar.Alignment], torch.Tensor]:
    """Produce emphasis scores for each word for files on disk

    Args:
        text_file: The speech transcript (.txt) or alignment (.TextGrid) file
        audio_file: The speech waveform audio file
        checkpoint: The model checkpoint to use for inference
        batch_size: The maximum number of frames per batch
        gpu: The index of the gpu to run inference on

    Returns:
        alignment: The forced phoneme alignment
        scores: The float-valued emphasis scores for each word
    """
```
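
Per the docstring above, `text_file` may also be a precomputed `.TextGrid` alignment, presumably skipping forced alignment; the file names below are placeholders.

```python
import emphases

# Transcript route (forced alignment is computed)
alignment, scores = emphases.from_file('example.txt', 'example.wav')

# Alignment route (a precomputed .TextGrid is used directly)
alignment, scores = emphases.from_file('example.TextGrid', 'example.wav')
```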


#### `emphases.from_file_to_file`

```python
def from_file_to_file(
    text_file: Union[str, bytes, os.PathLike],
    audio_file: Union[str, bytes, os.PathLike],
    output_prefix: Optional[Union[str, bytes, os.PathLike]] = None,
    checkpoint: Optional[Union[str, bytes, os.PathLike]] = None,
    batch_size: Optional[int] = None,
    gpu: Optional[int] = None
) -> None:
    """Produce emphasis scores for each word for files on disk and save to disk

    Args:
        text_file: The speech transcript (.txt) or alignment (.TextGrid) file
        audio_file: The speech waveform audio file
        output_prefix: The output prefix. Defaults to text file stem.
        checkpoint: The model checkpoint to use for inference
        batch_size: The maximum number of frames per batch
        gpu: The index of the gpu to run inference on
    """
```

Emphases are saved as a list of five-tuples containing the word, start time,
end time, a float-valued emphasis score, and a boolean that is true if the
word is emphasized.
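
A hedged sketch of running file-to-file inference and reading the result, assuming the output is written as JSON at `<output_prefix>.json` (the command-line help below defaults outputs to a `.json` suffix); the exact output file name is an assumption.

```python
import json

import emphases

# Score emphases and save to disk (default output prefix is the text file stem)
emphases.from_file_to_file('example.txt', 'example.wav')

# Read the saved five-tuples: (word, start, end, score, is_emphasized)
with open('example.json') as file:  # assumed output path
    for word, start, end, score, is_emphasized in json.load(file):
        if is_emphasized:
            print(f'{word} ({start:.2f}-{end:.2f} s): score {score:.3f}')
```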


#### `emphases.from_files_to_files`

```python
def from_files_to_files(
    text_files: List[Union[str, bytes, os.PathLike]],
    audio_files: List[Union[str, bytes, os.PathLike]],
    output_prefixes: Optional[List[Union[str, bytes, os.PathLike]]] = None,
    checkpoint: Optional[Union[str, bytes, os.PathLike]] = None,
    batch_size: Optional[int] = None,
    gpu: Optional[int] = None
) -> None:
    """Produce emphasis scores for each word for many files and save to disk

    Args:
        text_files: The speech transcript (.txt) or alignment (.TextGrid) files
        audio_files: The corresponding speech audio files
        output_prefixes: The output prefixes. Defaults to text file stems.
        checkpoint: The model checkpoint to use for inference
        batch_size: The maximum number of frames per batch
        gpu: The index of the gpu to run inference on
    """
```
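
A batch-processing sketch using the same conventions; leaving `checkpoint` as `None` presumably selects the default pretrained model, and the file names are placeholders.

```python
import emphases

text_files = ['utterance-1.txt', 'utterance-2.TextGrid']
audio_files = ['utterance-1.wav', 'utterance-2.wav']

# Score emphases for all files and save results next to the text files
emphases.from_files_to_files(text_files, audio_files, gpu=0)  # omit gpu for CPU
```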


### Command-line interface

```
python -m emphases
    [-h]
    --text_files TEXT_FILES [TEXT_FILES ...]
    --audio_files AUDIO_FILES [AUDIO_FILES ...]
    [--output_files OUTPUT_FILES [OUTPUT_FILES ...]]
    [--checkpoint CHECKPOINT]
    [--batch_size BATCH_SIZE]
    [--gpu GPU]

Determine which words in a speech file are emphasized

options:
  -h, --help            show this help message and exit
  --text_files TEXT_FILES [TEXT_FILES ...]
                        The speech transcript text files
  --audio_files AUDIO_FILES [AUDIO_FILES ...]
                        The corresponding speech audio files
  --output_files OUTPUT_FILES [OUTPUT_FILES ...]
                        The output files. Default is text files with json suffix.
  --checkpoint CHECKPOINT
                        The model checkpoint to use for inference
  --batch_size BATCH_SIZE
                        The maximum number of frames per batch
  --gpu GPU             The index of the gpu to run inference on
```
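
An example invocation mirroring the usage above; the file names are placeholders.

```
python -m emphases \
    --text_files example.txt \
    --audio_files example.wav \
    --output_files example.json \
    --gpu 0
```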


## Training

### Download data

`python -m emphases.download --datasets <datasets>`

Downloads and uncompresses datasets.

**N.B.** We omit Buckeye from the public release. This evaluation dataset can be
reconstructed by [downloading Buckeye](https://buckeyecorpus.osu.edu/) and matching
the audio files to the
[annotations](https://github.com/ProSD-Lab/Prominence-perception-in-English-French-Spanish/).
This matching was performed for us and is tricky to replicate exactly, and due
to licensing restrictions on Buckeye, we cannot legally distribute our private,
aligned annotations.


### Annotate data

Performing annotation requires first installing
[Reproducible Subjective Evaluation (ReSEval)](https://github.com/reseval/reseval).

`python -m emphases.annotate --datasets <datasets>`

Launches a local web application for emphasis annotation, configured by the
ReSEval configuration file `emphases/assets/configs/annotate.yaml`.

`python -m emphases.annotate --datasets <datasets> --remote --production`

Launches a crowdsourced emphasis annotation task using the same configuration
file.


### Partition data

`python -m emphases.partition`

Generates `train`, `valid`, and `test` partitions for all datasets.
Partitioning is deterministic given the same random seed. You do not need to
run this step, as the original partitions are saved in
`emphases/assets/partitions`.


### Preprocess

`python -m emphases.preprocess`


### Train

`python -m emphases.train --config <config> --dataset <dataset> --gpus <gpus>`

Trains a model according to a given configuration. The `--gpus` argument takes
a list of GPU indices and enables distributed data parallelism (DDP) when more
than one index is given. For example, `--gpus 0 3` trains using DDP on GPUs
`0` and `3`.


## Evaluation

### Evaluate

`python -m emphases.evaluate --config <config> --checkpoint <checkpoint> --gpu <gpu>`


### Monitor

Run `tensorboard --logdir runs/`. If you are running training
remotely, you must create an SSH connection with port forwarding to view
TensorBoard. This can be done with `ssh -L 6006:localhost:6006
<user>@<server-ip-address>`. Then, open `localhost:6006` in your browser.


## Citation

### IEEE
M. Morrison, P. Pawar, N. Pruyne, J. Cole, and B. Pardo, "Crowdsourced and Automatic Speech Prominence Estimation," International Conference on Acoustics, Speech, & Signal Processing, 2024.


### BibTeX

```
@inproceedings{morrison2024crowdsourced,
    title={Crowdsourced and Automatic Speech Prominence Estimation},
    author={Morrison, Max and Pawar, Pranav and Pruyne, Nathan and Cole, Jennifer and Pardo, Bryan},
    booktitle={International Conference on Acoustics, Speech, & Signal Processing},
    year={2024}
}
```
