aspeak

Name: aspeak
Version: 6.0.1
Home page: https://github.com/kxxt/aspeak
Summary: A simple text-to-speech client for Azure TTS API.
Upload time: 2023-10-03 01:47:32
Author: kxxt <rsworktech@outlook.com>
License: MIT
Keywords: speech-synthesis, aspeak, tts, text-to-speech, audio
# :speaking_head: aspeak

[![GitHub stars](https://img.shields.io/github/stars/kxxt/aspeak)](https://github.com/kxxt/aspeak/stargazers)
[![GitHub issues](https://img.shields.io/github/issues/kxxt/aspeak)](https://github.com/kxxt/aspeak/issues)
[![GitHub forks](https://img.shields.io/github/forks/kxxt/aspeak)](https://github.com/kxxt/aspeak/network)
[![GitHub license](https://img.shields.io/github/license/kxxt/aspeak)](https://github.com/kxxt/aspeak/blob/v6/LICENSE)

<a href="https://github.com/kxxt/aspeak/graphs/contributors" alt="Contributors">
    <img src="https://img.shields.io/github/contributors/kxxt/aspeak" />
</a>
<a href="https://github.com/kxxt/aspeak/pulse" alt="Activity">
    <img src="https://img.shields.io/github/commit-activity/m/kxxt/aspeak" />
</a>

A simple text-to-speech client for Azure TTS API. :laughing:

## Note

Starting from version 6.0.0, `aspeak` by default uses the RESTful API of Azure TTS. If you want to use the WebSocket API,
you can specify `--mode websocket` when invoking `aspeak` or set `mode = "websocket"` in the `auth` section of your profile.

Starting from version 4.0.0, `aspeak` is rewritten in Rust. The old Python version is available on the `python` branch.

You can sign up for an Azure account and then
[choose a payment plan as needed (or stick to free tier)](https://azure.microsoft.com/en-us/pricing/details/cognitive-services/speech-services/).
The free tier includes a monthly quota of 0.5 million characters.

Please refer to the [Authentication section](#authentication) to learn how to set up authentication for aspeak.

## Installation

### Download from GitHub Releases (Recommended for most users)

Download the latest release from [here](https://github.com/kxxt/aspeak/releases/latest).

After downloading, extract the archive and you will get a binary executable file.

You can put it in a directory that is in your `PATH` environment variable so that you can run it from anywhere.

### Install from AUR (Recommended for Arch Linux users)

Since v4.1.0, you can install `aspeak-bin` from the AUR.

### Install from PyPI

Installing from PyPI will also install the Python binding of `aspeak` for you. Check [Library Usage#Python](#Python) for more information on using the Python binding.

```bash
pip install -U aspeak==6.0.0
```

Currently, prebuilt wheels are only available for the x86_64 architecture.
Due to some technical issues, I haven't uploaded the source distribution to PyPI yet,
so to build a wheel from source, you need to follow the instructions in [Install from Source](#Install-from-Source).

Because of manylinux compatibility issues, the wheels for Linux are not available on PyPI. (But you can still build them from source.)

### Install from Source

#### CLI Only

The easiest way to install `aspeak` from source is to use cargo:

```bash
cargo install aspeak -F binary
```

Alternatively, you can also install `aspeak` from AUR.

#### Python Wheel

To build the python wheel, you need to install `maturin` first:

```bash
pip install maturin
```

After cloning the repository and `cd`ing into its directory, you can build the wheel by running:

```bash
maturin build --release --strip -F python --bindings pyo3 --interpreter python --manifest-path Cargo.toml --out dist-pyo3
maturin build --release --strip --bindings bin -F binary --interpreter python --manifest-path Cargo.toml --out dist-bin
bash merge-wheel.bash
```

If everything goes well, you will get a wheel file in the `dist` directory.

## Usage

Run `aspeak help` to see the help message.

Run `aspeak help <subcommand>` to see the help message of a subcommand.

### Authentication

The authentication options should be placed before any subcommand.

For example, to utilize your subscription key and
an official endpoint designated by a region,
run the following command:

```sh
$ aspeak --region <YOUR_REGION> --key <YOUR_SUBSCRIPTION_KEY> text "Hello World"
```

If you are using a custom endpoint, you can use the `--endpoint` option instead of `--region`.

To avoid repetition, you can store your authentication details
in your aspeak profile.
Read the following section for more details.

From v5.2.0, you can also set the authentication secrets via the following environment variables:

- `ASPEAK_AUTH_KEY` for authentication using subscription key
- `ASPEAK_AUTH_TOKEN` for authentication using authorization token

From v4.3.0, you can let aspeak use a proxy server to connect to the endpoint.
For now, only http and socks5 proxies are supported (no https support yet). For example:

```sh
$ aspeak --proxy http://your_proxy_server:port text "Hello World"
$ aspeak --proxy socks5://your_proxy_server:port text "Hello World"
```

aspeak also respects the `HTTP_PROXY` (or `http_proxy`) environment variable.

### Configuration

aspeak v4 introduces the concept of profiles.
A profile is a configuration file where you can specify default values for the command line options.

Run the following command to create your default profile:

```sh
$ aspeak config init
```

To edit the profile, run:

```sh
$ aspeak config edit
```

If you have trouble running the above command, you can edit the profile manually.

First, get the path of the profile by running:

```sh
$ aspeak config where
```

Then edit the file with your favorite text editor.

The profile is a TOML file; check the comments in it for more information about the available options. The default profile looks like this:

```toml
# Profile for aspeak
# GitHub: https://github.com/kxxt/aspeak

# Output verbosity
# 0   - Default
# 1   - Verbose
# The following output verbosity levels are only supported on debug build
# 2   - Debug
# >=3 - Trace
verbosity = 0

#
# Authentication configuration
#

[auth]
# Endpoint for TTS
# endpoint = "wss://eastus.tts.speech.microsoft.com/cognitiveservices/websocket/v1"

# Alternatively, you can specify the region if you are using official endpoints
# region = "eastus"

# Synthesizer Mode, "rest" or "websocket"
# mode = "rest"

# Azure Subscription Key
# key = "YOUR_KEY"

# Authentication Token
# token = "Your Authentication Token"

# Extra http headers (for experts)
# headers = [["X-My-Header", "My-Value"], ["X-My-Header2", "My-Value2"]]

# Proxy
# proxy = "socks5://127.0.0.1:7890"

# Voice list API url
# voice_list_api = "Custom voice list API url"

#
# Configuration for text subcommand
#

[text]
# Voice to use. Note that it takes precedence over the locale
# voice = "en-US-JennyNeural"
# Locale to use
locale = "en-US"
# Rate
# rate = 0
# Pitch
# pitch = 0
# Role
# role = "Boy"
# Style, "general" by default
# style = "general"
# Style degree, a floating-point number between 0.1 and 2.0
# style_degree = 1.0

#
# Output Configuration
#

[output]
# Container Format, Only wav/mp3/ogg/webm is supported.
container = "wav"
# Audio Quality. Run `aspeak list-qualities` to see available qualities.
#
# If you choose a container format that does not support the quality level you specified here, 
# we will automatically select the closest level for you.
quality = 0
# Audio Format(for experts). Run `aspeak list-formats` to see available formats.
# Note that it takes precedence over container and quality!
# format = "audio-16khz-128kbitrate-mono-mp3"
```
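The "closest level" fallback described in the `quality` comment above can be sketched as follows. This is an illustrative sketch, not aspeak's actual code; the quality tables are copied from the `aspeak list-qualities` output shown later in this README.

```python
# Supported quality levels per container format, per `aspeak list-qualities`.
QUALITIES = {
    "mp3": {-4, -3, -2, -1, 0, 1, 2, 3},
    "wav": {-2, -1, 0, 1},
    "ogg": {-1, 0, 1},
    "webm": {-1, 0, 1},
}

def closest_quality(container: str, requested: int) -> int:
    """Pick the supported quality level nearest to the requested one."""
    return min(QUALITIES[container], key=lambda q: abs(q - requested))
```

For example, requesting quality `3` for the `ogg` container falls back to `1`, the highest level OGG supports.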

If you want to use a profile other than your default profile, you can use the `--profile` argument:

```sh
aspeak --profile <PATH_TO_A_PROFILE> text "Hello"
```

If you want to temporarily disable the profile, you can use the `--no-profile` argument:

```sh
aspeak --no-profile --region eastus --key <YOUR_KEY> text "Hello"
```

### Pitch and Rate

- `rate`: The speaking rate of the voice.
  - If you use a float value (say `0.5`), the value will be multiplied by 100% and become `50.00%`.
  - You can use the following values as well: `x-slow`, `slow`, `medium`, `fast`, `x-fast`, `default`.
  - You can also use percentage values directly: `+10%`.
  - You can also use a relative float value (with an `f` postfix), e.g. `1.2f`:
    - According to the [Azure documentation](https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-synthesis-markup?tabs=csharp#adjust-prosody),
    - A relative value, expressed as a number that acts as a multiplier of the default.
    - For example, a value of `1f` results in no change in the rate. A value of `0.5f` results in a halving of the rate. A value of `3f` results in a tripling of the rate.
- `pitch`: The pitch of the voice.
  - If you use a float value (say `-0.5`), the value will be multiplied by 100% and become `-50.00%`.
  - You can use the following values as well: `x-low`, `low`, `medium`, `high`, `x-high`, `default`.
  - You can also use percentage values directly: `+10%`.
  - You can also use a relative value (e.g. `-2st` or `+80Hz`):
    - According to the [Azure documentation](https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-synthesis-markup?tabs=csharp#adjust-prosody),
    - A relative value, expressed as a number preceded by "+" or "-" and followed by "Hz" or "st" that specifies an amount to change the pitch.
    - The "st" indicates the change unit is semitone, which is half of a tone (a half step) on the standard diatonic scale.
  - You can also use an absolute value, e.g. `600Hz`.

**Note**: Unreasonably high/low values will be clipped to reasonable values by Azure Cognitive Services.
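The value-handling rules for `rate` above can be sketched in a few lines. This is an illustrative sketch only, not aspeak's actual implementation, and the helper name is hypothetical:

```python
def normalize_rate(value: str) -> str:
    """Map a user-supplied rate value to an SSML prosody rate string.

    Illustrative sketch of the rules documented above.
    """
    keywords = {"x-slow", "slow", "medium", "fast", "x-fast", "default"}
    if value in keywords:
        return value           # named levels pass through unchanged
    if value.endswith("%"):
        return value           # explicit percentage, e.g. "+10%"
    if value.endswith("f"):
        return value[:-1]      # relative multiplier, e.g. "1.2f" -> "1.2"
    # bare float: multiplied by 100% and formatted as a percentage
    return f"{float(value) * 100:.2f}%"
```

Under these rules, `"0.5"` becomes `"50.00%"` while `"x-slow"` and `"+10%"` are passed through as-is. The same shape applies to `pitch`, with `Hz`/`st` suffixes instead of the `f` postfix.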

### Examples

The following examples assume that you have already set up authentication in your profile.

#### Speak "Hello, world!" to the default speaker.

```sh
$ aspeak text "Hello, world"
```

#### SSML to Speech

```sh
$ aspeak ssml << EOF
<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'><voice name='en-US-JennyNeural'>Hello, world!</voice></speak>
EOF
```

#### List all available voices.

```sh
$ aspeak list-voices
```

#### List all available voices for Chinese.

```sh
$ aspeak list-voices -l zh-CN
```

#### Get information about a voice.

```sh
$ aspeak list-voices -v en-US-SaraNeural
```

<details>

<summary>
    Output
</summary>

```
Microsoft Server Speech Text to Speech Voice (en-US, SaraNeural)
Display name: Sara
Local name: Sara @ en-US
Locale: English (United States)
Gender: Female
ID: en-US-SaraNeural
Voice type: Neural
Status: GA
Sample rate: 48000Hz
Words per minute: 157
Styles: ["angry", "cheerful", "excited", "friendly", "hopeful", "sad", "shouting", "terrified", "unfriendly", "whispering"]
```

</details>

#### Save synthesized speech to a file.

```sh
$ aspeak text "Hello, world" -o output.wav
```

If you prefer mp3/ogg/webm, you can use the `-c mp3`/`-c ogg`/`-c webm` options.

```sh
$ aspeak text "Hello, world" -o output.mp3 -c mp3
$ aspeak text "Hello, world" -o output.ogg -c ogg
$ aspeak text "Hello, world" -o output.webm -c webm
```

#### List available quality levels

```sh
$ aspeak list-qualities
```

<details>

<summary>Output</summary>

```
Qualities for MP3:
  3: audio-48khz-192kbitrate-mono-mp3
  2: audio-48khz-96kbitrate-mono-mp3
 -3: audio-16khz-64kbitrate-mono-mp3
  1: audio-24khz-160kbitrate-mono-mp3
 -2: audio-16khz-128kbitrate-mono-mp3
 -4: audio-16khz-32kbitrate-mono-mp3
 -1: audio-24khz-48kbitrate-mono-mp3
  0: audio-24khz-96kbitrate-mono-mp3

Qualities for WAV:
 -2: riff-8khz-16bit-mono-pcm
  1: riff-24khz-16bit-mono-pcm
  0: riff-24khz-16bit-mono-pcm
 -1: riff-16khz-16bit-mono-pcm

Qualities for OGG:
  0: ogg-24khz-16bit-mono-opus
 -1: ogg-16khz-16bit-mono-opus
  1: ogg-48khz-16bit-mono-opus

Qualities for WEBM:
  0: webm-24khz-16bit-mono-opus
 -1: webm-16khz-16bit-mono-opus
  1: webm-24khz-16bit-24kbps-mono-opus
```

</details>

#### List available audio formats (For expert users)

```sh
$ aspeak list-formats
```

<details>

<summary>Output</summary>

```
amr-wb-16000hz
audio-16khz-128kbitrate-mono-mp3
audio-16khz-16bit-32kbps-mono-opus
audio-16khz-32kbitrate-mono-mp3
audio-16khz-64kbitrate-mono-mp3
audio-24khz-160kbitrate-mono-mp3
audio-24khz-16bit-24kbps-mono-opus
audio-24khz-16bit-48kbps-mono-opus
audio-24khz-48kbitrate-mono-mp3
audio-24khz-96kbitrate-mono-mp3
audio-48khz-192kbitrate-mono-mp3
audio-48khz-96kbitrate-mono-mp3
ogg-16khz-16bit-mono-opus
ogg-24khz-16bit-mono-opus
ogg-48khz-16bit-mono-opus
raw-16khz-16bit-mono-pcm
raw-16khz-16bit-mono-truesilk
raw-22050hz-16bit-mono-pcm
raw-24khz-16bit-mono-pcm
raw-24khz-16bit-mono-truesilk
raw-44100hz-16bit-mono-pcm
raw-48khz-16bit-mono-pcm
raw-8khz-16bit-mono-pcm
raw-8khz-8bit-mono-alaw
raw-8khz-8bit-mono-mulaw
riff-16khz-16bit-mono-pcm
riff-22050hz-16bit-mono-pcm
riff-24khz-16bit-mono-pcm
riff-44100hz-16bit-mono-pcm
riff-48khz-16bit-mono-pcm
riff-8khz-16bit-mono-pcm
riff-8khz-8bit-mono-alaw
riff-8khz-8bit-mono-mulaw
webm-16khz-16bit-mono-opus
webm-24khz-16bit-24kbps-mono-opus
webm-24khz-16bit-mono-opus
```

</details>

#### Increase/Decrease audio qualities

```sh
# Less than default quality.
$ aspeak text "Hello, world" -o output.mp3 -c mp3 -q=-1
# Best quality for mp3
$ aspeak text "Hello, world" -o output.mp3 -c mp3 -q=3
```

#### Read text from a file and speak it.

```sh
$ cat input.txt | aspeak text
```

or

```sh
$ aspeak text -f input.txt
```

with custom encoding:

```sh
$ aspeak text -f input.txt -e gbk
```

#### Read from stdin and speak it.

```sh
$ aspeak text
```

Or maybe you prefer:

```sh
$ aspeak text -l zh-CN << EOF
我能吞下玻璃而不伤身体。
EOF
```

#### Speak Chinese.

```sh
$ aspeak text "你好,世界!" -l zh-CN
```

#### Use a custom voice.

```sh
$ aspeak text "你好,世界!" -v zh-CN-YunjianNeural
```

#### Custom pitch, rate and style

```sh
$ aspeak text "你好,世界!" -v zh-CN-XiaoxiaoNeural -p 1.5 -r 0.5 -S sad
$ aspeak text "你好,世界!" -v zh-CN-XiaoxiaoNeural -p=-10% -r=+5% -S cheerful
$ aspeak text "你好,世界!" -v zh-CN-XiaoxiaoNeural -p=+40Hz -r=1.2f -S fearful
$ aspeak text "你好,世界!" -v zh-CN-XiaoxiaoNeural -p=high -r=x-slow -S calm
$ aspeak text "你好,世界!" -v zh-CN-XiaoxiaoNeural -p=+1st -r=-7% -S lyrical
```

### Advanced Usage

#### Use a custom audio format for output

**Note**: Some audio formats are not supported when outputting to speaker.

```sh
$ aspeak text "Hello World" -F riff-48khz-16bit-mono-pcm -o high-quality.wav
```

## Library Usage

### Python

The new version of `aspeak` is written in Rust, and the Python binding is provided by PyO3.

Here is a simple example:

```python
from aspeak import SpeechService

service = SpeechService(region="eastus", key="YOUR_AZURE_SUBSCRIPTION_KEY")
service.speak_text("Hello, world")
```

First you need to create a `SpeechService` instance.

When creating a `SpeechService` instance, you can specify the following parameters:

- `audio_format` (positional argument): The audio format of the output audio. Defaults to `AudioFormat.Riff24KHz16BitMonoPcm`.
  - You can get an audio format by providing a container format and a quality level: `AudioFormat("mp3", 2)`.
- `endpoint`: The endpoint of the speech service.
- `region`: Alternatively, you can specify the region of the speech service instead of typing the full endpoint URL.
- `key`: The subscription key of the speech service.
- `token`: The auth token for the speech service. If you provide a token, the subscription key will be ignored.
- `headers`: Additional HTTP headers for the speech service.
- `mode`: Choose the synthesizer to use. Either `rest` or `websocket`.
  - In websocket mode, the synthesizer will connect to the endpoint when the `SpeechService` instance is created.

After that, you can call `speak_text()` to speak the text or `speak_ssml()` to speak the SSML.
Or you can call `synthesize_text()` or `synthesize_ssml()` to get the audio data.

For `synthesize_text()` and `synthesize_ssml()`, if you provide an `output`, the audio data will be written to that file and the function will return `None`. Otherwise, the function will return the audio data.

Here are the common options for `speak_text()` and `synthesize_text()`:

- `locale`: The locale of the voice. Default is `en-US`.
- `voice`: The voice name. Default is `en-US-JennyNeural`.
- `rate`: The speaking rate of the voice. It must be a string that fits the requirements as documented in this section: [Pitch and Rate](#pitch-and-rate)
- `pitch`: The pitch of the voice. It must be a string that fits the requirements as documented in this section: [Pitch and Rate](#pitch-and-rate)
- `style`: The style of the voice.
  - You can get a list of available styles for a specific voice by executing `aspeak -L -v <VOICE_ID>`
  - The default value is `general`.
- `style_degree`: The degree of the style.
  - According to the
    [Azure documentation](https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-synthesis-markup?tabs=csharp#adjust-speaking-styles)
    , style degree specifies the intensity of the speaking style.
    It is a floating point number between 0.01 and 2, inclusive.
  - At the time of writing, style degree adjustments are supported for Chinese (Mandarin, Simplified) neural voices.
- `role`: The role of the voice.
  - According to the
    [Azure documentation](https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-synthesis-markup?tabs=csharp#adjust-speaking-styles)
    , `role` specifies the speaking role-play. The voice acts as a different age and gender, but the voice name isn't
    changed.
  - At the time of writing, role adjustments are supported for these Chinese (Mandarin, Simplified) neural voices:
    `zh-CN-XiaomoNeural`, `zh-CN-XiaoxuanNeural`, `zh-CN-YunxiNeural`, and `zh-CN-YunyeNeural`.
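Per the Azure SSML documentation linked above, these options roughly correspond to the `<prosody>` and `<mstts:express-as>` SSML elements. The following sketch shows that mapping; it is illustrative only (the function name is hypothetical, and aspeak's actual SSML construction may differ):

```python
def build_ssml(text: str, voice: str = "en-US-JennyNeural",
               rate: str = "0%", pitch: str = "0%",
               style: str = "general", style_degree: float = 1.0) -> str:
    """Roughly how the text options translate to Azure SSML (illustrative)."""
    return (
        '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" '
        'xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">'
        f'<voice name="{voice}">'
        f'<mstts:express-as style="{style}" styledegree="{style_degree}">'
        f'<prosody rate="{rate}" pitch="{pitch}">{text}</prosody>'
        '</mstts:express-as></voice></speak>'
    )
```

A call like `build_ssml("Hello", rate="+5%", style="cheerful")` produces a document similar to what you would pass to `speak_ssml()` yourself.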

### Rust

Add `aspeak` to your `Cargo.toml`:

```bash
$ cargo add aspeak
```

Then follow the [documentation](https://docs.rs/aspeak) of `aspeak` crate.

There are four examples for quick reference:

- [Simple usage of RestSynthesizer](https://github.com/kxxt/aspeak/blob/v6/examples/03-rest-synthesizer-simple.rs)
- [Simple usage of WebsocketSynthesizer](https://github.com/kxxt/aspeak/blob/v6/examples/04-websocket-synthesizer-simple.rs)
- [Synthesize all txt files in a given directory](https://github.com/kxxt/aspeak/blob/v6/examples/01-synthesize-txt-files.rs)
- [Read-Synthesize-Speak-Loop: Read text from stdin line by line and speak it](https://github.com/kxxt/aspeak/blob/v6/examples/02-rssl.rs)


            

It must be a string that fits the requirements as documented in this section: [Pitch and Rate](#pitch-and-rate)\n- `style`: The style of the voice.\n  - You can get a list of available styles for a specific voice by executing `aspeak -L -v <VOICE_ID>`\n  - The default value is `general`.\n- `style_degree`: The degree of the style.\n  - According to the\n    [Azure documentation](https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-synthesis-markup?tabs=csharp#adjust-speaking-styles)\n    , style degree specifies the intensity of the speaking style.\n    It is a floating point number between 0.01 and 2, inclusive.\n  - At the time of writing, style degree adjustments are supported for Chinese (Mandarin, Simplified) neural voices.\n- `role`: The role of the voice.\n  - According to the\n    [Azure documentation](https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-synthesis-markup?tabs=csharp#adjust-speaking-styles)\n    , `role` specifies the speaking role-play. 
The voice acts as a different age and gender, but the voice name isn't\n    changed.\n  - At the time of writing, role adjustments are supported for these Chinese (Mandarin, Simplified) neural voices:\n    `zh-CN-XiaomoNeural`, `zh-CN-XiaoxuanNeural`, `zh-CN-YunxiNeural`, and `zh-CN-YunyeNeural`.\n\n### Rust\n\nAdd `aspeak` to your `Cargo.toml`:\n\n```bash\n$ cargo add aspeak\n```\n\nThen follow the [documentation](https://docs.rs/aspeak) of `aspeak` crate.\n\nThere are 4 examples for quick reference:\n\n- [Simple usage of RestSynthesizer](https://github.com/kxxt/aspeak/blob/v6/examples/03-rest-synthesizer-simple.rs)\n- [Simple usage of WebsocketSynthesizer](https://github.com/kxxt/aspeak/blob/v6/examples/04-websocket-synthesizer-simple.rs)\n- [Synthesize all txt files in a given directory](https://github.com/kxxt/aspeak/blob/v6/examples/01-synthesize-txt-files.rs)\n- [Read-Synthesize-Speak-Loop: Read text from stdin line by line and speak it](https://github.com/kxxt/aspeak/blob/v6/examples/02-rssl.rs)\n\n",
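The `-p`/`-r` CLI flags and the `pitch`/`rate` library options shown above accept values in several shapes: bare floats (`1.5`), percentages (`-10%`, `+5%`), Hz offsets (`+40Hz`), `f`-suffixed rate multipliers (`1.2f`), semitone offsets (`+1st`), and named constants (`high`, `x-slow`). As an informal recap, here is a minimal stdlib-only sketch that recognizes those shapes. It is not aspeak's actual parser (the authoritative grammar is in the Pitch and Rate section), and the named constants beyond the two seen in the examples are assumed from standard SSML prosody values:

```python
import re

# Value shapes taken from the pitch/rate examples above. This is an
# illustrative sketch, NOT aspeak's real validator.
_PATTERNS = [
    r"[+-]?\d+(\.\d+)?",      # bare float, e.g. 1.5 or 0.5
    r"[+-]?\d+(\.\d+)?%",     # percentage offset, e.g. -10% or +5%
    r"[+-]?\d+(\.\d+)?Hz",    # frequency offset (pitch), e.g. +40Hz
    r"[+-]?\d+(\.\d+)?f",     # rate multiplier, e.g. 1.2f
    r"[+-]?\d+(\.\d+)?st",    # semitone offset (pitch), e.g. +1st
]

# "high" and "x-slow" appear in the examples above; the rest are the
# standard SSML prosody names (an assumption, not taken from aspeak).
_NAMED = {
    "default", "x-low", "low", "medium", "high", "x-high",   # pitch
    "x-slow", "slow", "fast", "x-fast",                      # rate
}

def looks_like_pitch_or_rate(value: str) -> bool:
    """Rough check that `value` matches one of the example shapes."""
    return value in _NAMED or any(re.fullmatch(p, value) for p in _PATTERNS)
```

For example, `looks_like_pitch_or_rate("+40Hz")` and `looks_like_pitch_or_rate("1.2f")` are true, while a stray word like `"loud"` is rejected.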
## Release files

- aspeak-6.0.1-cp38-abi3-macosx_10_7_x86_64.whl (bdist_wheel, cp38/abi3, 4205091 bytes)
  - uploaded: 2023-10-03T01:47:32.332941Z
  - sha256: 916095b33838aa7d5957381369c1829efb56028ef274ec48bf9452dafa5d5241
  - md5: 7544b007fcbb53e6b1ab028633a0c494
  - url: https://files.pythonhosted.org/packages/dd/63/c0b4b7bffaa1cab30620cdb188a45830102655c2db41c0eb682731a44db8/aspeak-6.0.1-cp38-abi3-macosx_10_7_x86_64.whl
- aspeak-6.0.1-cp38-abi3-win_amd64.whl (bdist_wheel, cp38/abi3, 4477513 bytes)
  - uploaded: 2023-10-03T01:47:34.222192Z
  - sha256: c03ed0d97cad97c78ec7a2a85650b40d3cc4ade4f40d5efe1100fe2978d13dcc
  - md5: 5579143a0e653227bdf1853dc7248a50
  - url: https://files.pythonhosted.org/packages/e6/ec/06f62a1c8267a01628b597e1367eba1b6a7e4e35a1f427796de7fc1cd134/aspeak-6.0.1-cp38-abi3-win_amd64.whl
        