# assemblyai

- **Name:** assemblyai
- **Version:** 0.37.0
- **Summary:** AssemblyAI Python SDK
- **Home page:** https://github.com/AssemblyAI/assemblyai-python-sdk
- **Author:** AssemblyAI
- **Requires Python:** >=3.8
- **License:** MIT License
- **Upload time:** 2025-02-03 10:08:11

            <img src="https://github.com/AssemblyAI/assemblyai-python-sdk/blob/master/assemblyai.png?raw=true" width="500"/>

---

[![CI Passing](https://github.com/AssemblyAI/assemblyai-python-sdk/actions/workflows/test.yml/badge.svg)](https://github.com/AssemblyAI/assemblyai-python-sdk/actions/workflows/test.yml)
[![GitHub License](https://img.shields.io/github/license/AssemblyAI/assemblyai-python-sdk)](https://github.com/AssemblyAI/assemblyai-python-sdk/blob/master/LICENSE)
[![PyPI version](https://badge.fury.io/py/assemblyai.svg)](https://badge.fury.io/py/assemblyai)
[![PyPI Python Versions](https://img.shields.io/pypi/pyversions/assemblyai)](https://pypi.python.org/pypi/assemblyai/)
![PyPI - Wheel](https://img.shields.io/pypi/wheel/assemblyai)
[![AssemblyAI Twitter](https://img.shields.io/twitter/follow/AssemblyAI?label=%40AssemblyAI&style=social)](https://twitter.com/AssemblyAI)
[![AssemblyAI YouTube](https://img.shields.io/youtube/channel/subscribers/UCtatfZMf-8EkIwASXM4ts0A)](https://www.youtube.com/@AssemblyAI)
[![Discord](https://img.shields.io/discord/875120158014853141?logo=discord&label=Discord&link=https%3A%2F%2Fdiscord.com%2Fchannels%2F875120158014853141&style=social)
](https://assemblyai.com/discord)

# AssemblyAI's Python SDK

> _Build with AI models that can transcribe and understand audio_

With a single API call, get access to AI models built on the latest AI breakthroughs to transcribe and understand audio and speech data securely at large scale.

# Overview

- [AssemblyAI's Python SDK](#assemblyais-python-sdk)
- [Overview](#overview)
- [Documentation](#documentation)
- [Quick Start](#quick-start)
  - [Installation](#installation)
  - [Examples](#examples)
    - [**Core Examples**](#core-examples)
    - [**LeMUR Examples**](#lemur-examples)
    - [**Audio Intelligence Examples**](#audio-intelligence-examples)
    - [**Real-Time Examples**](#real-time-examples)
    - [**Change the default settings**](#change-the-default-settings)
  - [Playground](#playground)
- [Advanced](#advanced)
  - [How the SDK handles Default Configurations](#how-the-sdk-handles-default-configurations)
    - [Defining Defaults](#defining-defaults)
    - [Overriding Defaults](#overriding-defaults)
  - [Synchronous vs Asynchronous](#synchronous-vs-asynchronous)
  - [Getting the HTTP status code](#getting-the-http-status-code)
  - [Polling Intervals](#polling-intervals)
  - [Retrieving Existing Transcripts](#retrieving-existing-transcripts)
    - [Retrieving a Single Transcript](#retrieving-a-single-transcript)
    - [Retrieving Multiple Transcripts as a Group](#retrieving-multiple-transcripts-as-a-group)
    - [Retrieving Transcripts Asynchronously](#retrieving-transcripts-asynchronously)

# Documentation

Visit our [AssemblyAI API Documentation](https://www.assemblyai.com/docs) to get an overview of our models!

# Quick Start

## Installation

```bash
pip install -U assemblyai
```

## Examples

Before starting, you need to set the API key. If you don't have one yet, [**sign up for one**](https://www.assemblyai.com/dashboard/signup)!

```python
import os

import assemblyai as aai

# set the API key (here read from an environment variable)
aai.settings.api_key = os.getenv("ASSEMBLYAI_API_KEY")
```

---

### **Core Examples**

<details>
  <summary>Transcribe a local audio file</summary>

```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("./my-local-audio-file.wav")

print(transcript.text)
```

</details>

<details>
  <summary>Transcribe a URL</summary>

```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/audio.mp3")

print(transcript.text)
```

</details>

<details>
  <summary>Transcribe binary data</summary>

```python
import assemblyai as aai

transcriber = aai.Transcriber()

# Read the audio file as binary data
with open("./my-local-audio-file.wav", "rb") as f:
    data = f.read()

# Binary data is supported directly:
transcript = transcriber.transcribe(data)

# Or: Upload the data separately and transcribe the resulting URL:
upload_url = transcriber.upload_file(data)
transcript = transcriber.transcribe(upload_url)
```

</details>

<details>
  <summary>Export subtitles of an audio file</summary>

```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/audio.mp3")

# in SRT format
print(transcript.export_subtitles_srt())

# in VTT format
print(transcript.export_subtitles_vtt())
```

</details>

<details>
  <summary>List all sentences and paragraphs</summary>

```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/audio.mp3")

sentences = transcript.get_sentences()
for sentence in sentences:
  print(sentence.text)

paragraphs = transcript.get_paragraphs()
for paragraph in paragraphs:
  print(paragraph.text)
```

</details>

<details>
  <summary>Search for words in a transcript</summary>

```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/audio.mp3")

matches = transcript.word_search(["price", "product"])

for match in matches:
  print(f"Found '{match.text}' {match.count} times in the transcript")
```

</details>

<details>
  <summary>Add custom spellings on a transcript</summary>

```python
import assemblyai as aai

config = aai.TranscriptionConfig()
config.set_custom_spelling(
  {
    "Kubernetes": ["k8s"],
    "SQL": ["Sequel"],
  }
)

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/audio.mp3", config)

print(transcript.text)
```

</details>

<details>
  <summary>Upload a file</summary>

```python
import assemblyai as aai

transcriber = aai.Transcriber()

# Read the audio file as binary data
with open("./my-local-audio-file.wav", "rb") as f:
    data = f.read()

upload_url = transcriber.upload_file(data)
```

</details>

<details>
  <summary>Delete a transcript</summary>

```python
import assemblyai as aai

audio_url = "https://example.org/audio.mp3"
transcript = aai.Transcriber().transcribe(audio_url)

aai.Transcript.delete_by_id(transcript.id)
```

</details>

<details>
  <summary>List transcripts</summary>

This returns a page of transcripts you created.

```python
import assemblyai as aai

transcriber = aai.Transcriber()

page = transcriber.list_transcripts()
print(page.page_details)  # Page details
print(page.transcripts)  # List of transcripts
```

You can apply filter parameters:

```python
params = aai.ListTranscriptParameters(
    limit=3,
    status=aai.TranscriptStatus.completed,
)
page = transcriber.list_transcripts(params)
```

You can also paginate over all pages by using the helper property `before_id_of_prev_url`.

The `prev_url` always points to a page with older transcripts. If you extract the `before_id`
of the `prev_url` query parameters, you can paginate over all pages from newest to oldest.

```python
transcriber = aai.Transcriber()

params = aai.ListTranscriptParameters()

page = transcriber.list_transcripts(params)
while page.page_details.before_id_of_prev_url is not None:
    # process page.transcripts here, then fetch the next (older) page
    params.before_id = page.page_details.before_id_of_prev_url
    page = transcriber.list_transcripts(params)
```

</details>

---

### **LeMUR Examples**

<details>
  <summary>Use LeMUR to summarize an audio file</summary>

```python
import assemblyai as aai

audio_file = "https://assembly.ai/sports_injuries.mp3"

transcriber = aai.Transcriber()
transcript = transcriber.transcribe(audio_file)

prompt = "Provide a brief summary of the transcript."

result = transcript.lemur.task(
    prompt, final_model=aai.LemurModel.claude3_5_sonnet
)

print(result.response)
```

Or use the specialized Summarization endpoint that requires no prompt engineering and facilitates more deterministic and structured outputs:

```python
import assemblyai as aai

audio_url = "https://assembly.ai/meeting.mp4"
transcript = aai.Transcriber().transcribe(audio_url)

result = transcript.lemur.summarize(
    final_model=aai.LemurModel.claude3_5_sonnet,
    context="A GitLab meeting to discuss logistics",
    answer_format="TLDR"
)

print(result.response)
```

</details>

<details>
  <summary>Use LeMUR to ask questions about your audio data</summary>

```python
import assemblyai as aai

audio_file = "https://assembly.ai/sports_injuries.mp3"

transcriber = aai.Transcriber()
transcript = transcriber.transcribe(audio_file)

prompt = "What is a runner's knee?"

result = transcript.lemur.task(
    prompt, final_model=aai.LemurModel.claude3_5_sonnet
)

print(result.response)
```

Or use the specialized Q&A endpoint that requires no prompt engineering and facilitates more deterministic and structured outputs:

```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/customer.mp3")

# ask some questions
questions = [
    aai.LemurQuestion(question="What car was the customer interested in?"),
    aai.LemurQuestion(question="What price range is the customer looking for?"),
]

result = transcript.lemur.question(
  final_model=aai.LemurModel.claude3_5_sonnet,
  questions=questions)

for q in result.response:
    print(f"Question: {q.question}")
    print(f"Answer: {q.answer}")
```

</details>

<details>
  <summary>Use LeMUR with customized input text</summary>

```python
import assemblyai as aai

transcriber = aai.Transcriber()
config = aai.TranscriptionConfig(
  speaker_labels=True,
)
transcript = transcriber.transcribe("https://example.org/customer.mp3", config=config)

# Example converting speaker label utterances into LeMUR input text
text = ""

for utt in transcript.utterances:
    text += f"Speaker {utt.speaker}:\n{utt.text}\n"

result = aai.Lemur().task(
  "You are a helpful coach. Provide an analysis of the transcript "
  "and offer areas to improve with exact quotes. Include no preamble. "
  "Start with an overall summary then get into the examples with feedback.",
  input_text=text,
  final_model=aai.LemurModel.claude3_5_sonnet
)

print(result.response)
```

</details>

<details>
  <summary>Apply LeMUR to multiple transcripts</summary>

```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript_group = transcriber.transcribe_group(
    [
        "https://example.org/customer1.mp3",
        "https://example.org/customer2.mp3",
    ],
)

result = transcript_group.lemur.task(
  context="These are calls of customers asking for cars. Summarize all calls and create a TLDR.",
  final_model=aai.LemurModel.claude3_5_sonnet
)

print(result.response)
```

</details>

<details>
  <summary>Delete data previously sent to LeMUR</summary>

```python
import assemblyai as aai

# Create a transcript and a corresponding LeMUR request that may contain sensitive information.
transcriber = aai.Transcriber()
transcript_group = transcriber.transcribe_group(
  [
    "https://example.org/customer1.mp3",
  ],
)

result = transcript_group.lemur.summarize(
  context="Customers providing sensitive, personally identifiable information",
  answer_format="TLDR"
)

# Get the request ID from the LeMUR response
request_id = result.request_id

# Now we can delete the data about this request
deletion_result = aai.Lemur.purge_request_data(request_id)
print(deletion_result)
```

</details>

---

### **Audio Intelligence Examples**

<details>
  <summary>PII Redact a transcript</summary>

```python
import assemblyai as aai

config = aai.TranscriptionConfig()
config.set_redact_pii(
  # What should be redacted
  policies=[
      aai.PIIRedactionPolicy.credit_card_number,
      aai.PIIRedactionPolicy.email_address,
      aai.PIIRedactionPolicy.location,
      aai.PIIRedactionPolicy.person_name,
      aai.PIIRedactionPolicy.phone_number,
  ],
  # How it should be redacted
  substitution=aai.PIISubstitutionPolicy.hash,
)

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/audio.mp3", config)
```

To request a copy of the original audio file with the redacted information "beeped" out, set `redact_pii_audio=True` in the config.
Once the `Transcript` object is returned, you can access the URL of the redacted audio file with `get_redacted_audio_url`, or save the redacted audio directly to disk with `save_redacted_audio`.

```python
import assemblyai as aai

transcript = aai.Transcriber().transcribe(
  "https://example.org/audio.mp3",
  config=aai.TranscriptionConfig(
    redact_pii=True,
    redact_pii_policies=[aai.PIIRedactionPolicy.person_name],
    redact_pii_audio=True
  )
)

redacted_audio_url = transcript.get_redacted_audio_url()
transcript.save_redacted_audio("redacted_audio.mp3")
```

[Read more about PII redaction here.](https://www.assemblyai.com/docs/Models/pii_redaction)

</details>
<details>
  <summary>Summarize the content of a transcript over time</summary>

```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe(
  "https://example.org/audio.mp3",
  config=aai.TranscriptionConfig(auto_chapters=True)
)

for chapter in transcript.chapters:
  print(f"Summary: {chapter.summary}")  # A one paragraph summary of the content spoken during this timeframe
  print(f"Start: {chapter.start}, End: {chapter.end}")  # Timestamps (in milliseconds) of the chapter
  print(f"Healine: {chapter.headline}")  # A single sentence summary of the content spoken during this timeframe
  print(f"Gist: {chapter.gist}")  # An ultra-short summary, just a few words, of the content spoken during this timeframe
```

[Read more about auto chapters here.](https://www.assemblyai.com/docs/Models/auto_chapters)

</details>

<details>
  <summary>Summarize the content of a transcript</summary>

```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe(
  "https://example.org/audio.mp3",
  config=aai.TranscriptionConfig(summarization=True)
)

print(transcript.summary)
```

By default, the summarization model will be `informative` and the summarization type will be `bullets`. [Read more about summarization models and types here](https://www.assemblyai.com/docs/Models/summarization#types-and-models).

To change the model and/or type, pass additional parameters to the `TranscriptionConfig`:

```python
config=aai.TranscriptionConfig(
  summarization=True,
  summary_model=aai.SummarizationModel.catchy,
  summary_type=aai.SummarizationType.headline
)
```

</details>
<details>
  <summary>Detect sensitive content in a transcript</summary>

```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe(
  "https://example.org/audio.mp3",
  config=aai.TranscriptionConfig(content_safety=True)
)


# Get the parts of the transcript which were flagged as sensitive
for result in transcript.content_safety.results:
  print(result.text)  # sensitive text snippet
  print(result.timestamp.start)
  print(result.timestamp.end)

  for label in result.labels:
    print(label.label)  # content safety category
    print(label.confidence) # model's confidence that the text is in this category
    print(label.severity) # severity of the text in relation to the category

# Get the confidence of the most common labels in relation to the entire audio file
for label, confidence in transcript.content_safety.summary.items():
  print(f"{confidence * 100}% confident that the audio contains {label}")

# Get the overall severity of the most common labels in relation to the entire audio file
for label, severity_confidence in transcript.content_safety.severity_score_summary.items():
  print(f"{severity_confidence.low * 100}% confident that the audio contains low-severity {label}")
  print(f"{severity_confidence.medium * 100}% confident that the audio contains mid-severity {label}")
  print(f"{severity_confidence.high * 100}% confident that the audio contains high-severity {label}")

```

[Read more about the content safety categories.](https://www.assemblyai.com/docs/Models/content_moderation#all-labels-supported-by-the-model)

By default, the content safety model will only include labels with a confidence greater than 0.5 (50%). To change this, pass `content_safety_confidence` (as an integer percentage between 25 and 100, inclusive) to the `TranscriptionConfig`:

```python
config=aai.TranscriptionConfig(
  content_safety=True,
  content_safety_confidence=80,  # only include labels with a confidence greater than 80%
)
```

</details>
<details>
  <summary>Analyze the sentiment of sentences in a transcript</summary>

```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe(
  "https://example.org/audio.mp3",
  config=aai.TranscriptionConfig(sentiment_analysis=True)
)

for sentiment_result in transcript.sentiment_analysis:
  print(sentiment_result.text)
  print(sentiment_result.sentiment)  # POSITIVE, NEUTRAL, or NEGATIVE
  print(sentiment_result.confidence)
  print(f"Timestamp: {sentiment_result.start} - {sentiment_result.end}")
```

If `speaker_labels` is also enabled, then each sentiment analysis result will also include a `speaker` field.

```python
# ...

config = aai.TranscriptionConfig(sentiment_analysis=True, speaker_labels=True)

# ...

for sentiment_result in transcript.sentiment_analysis:
  print(sentiment_result.speaker)
```

[Read more about sentiment analysis here.](https://www.assemblyai.com/docs/Models/sentiment_analysis)

</details>
<details>
  <summary>Identify entities in a transcript</summary>

```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe(
  "https://example.org/audio.mp3",
  config=aai.TranscriptionConfig(entity_detection=True)
)

for entity in transcript.entities:
  print(entity.text)  # e.g. "Dan Gilbert"
  print(entity.entity_type)  # e.g. EntityType.person
  print(f"Timestamp: {entity.start} - {entity.end}")
```

[Read more about entity detection here.](https://www.assemblyai.com/docs/Models/entity_detection)

</details>
<details>
  <summary>Detect topics in a transcript (IAB Classification)</summary>

```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe(
  "https://example.org/audio.mp3",
  config=aai.TranscriptionConfig(iab_categories=True)
)

# Get the parts of the transcript that were tagged with topics
for result in transcript.iab_categories.results:
  print(result.text)
  print(f"Timestamp: {result.timestamp.start} - {result.timestamp.end}")
  for label in result.labels:
    print(label.label)  # topic
    print(label.relevance)  # how relevant the label is for the portion of text

# Get a summary of all topics in the transcript
for label, relevance in transcript.iab_categories.summary.items():
  print(f"Audio is {relevance * 100}% relevant to {label}")
```

[Read more about IAB classification here.](https://www.assemblyai.com/docs/Models/iab_classification)

</details>
<details>
  <summary>Identify important words and phrases in a transcript</summary>

```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe(
  "https://example.org/audio.mp3",
  config=aai.TranscriptionConfig(auto_highlights=True)
)

for result in transcript.auto_highlights.results:
  print(result.text) # the important phrase
  print(result.rank) # relevancy of the phrase
  print(result.count) # number of instances of the phrase
  for timestamp in result.timestamps:
    print(f"Timestamp: {timestamp.start} - {timestamp.end}")

```

[Read more about auto highlights here.](https://www.assemblyai.com/docs/Models/key_phrases)

</details>

---

### **Real-Time Examples**

[Read more about our Real-Time service.](https://www.assemblyai.com/docs/Guides/real-time_streaming_transcription)

<details>
  <summary>Stream your microphone in real-time</summary>

```python
import assemblyai as aai

def on_open(session_opened: aai.RealtimeSessionOpened):
  "This function is called when the connection has been established."

  print("Session ID:", session_opened.session_id)

def on_data(transcript: aai.RealtimeTranscript):
  "This function is called when a new transcript has been received."

  if not transcript.text:
    return

  if isinstance(transcript, aai.RealtimeFinalTranscript):
    print(transcript.text, end="\r\n")
  else:
    print(transcript.text, end="\r")

def on_error(error: aai.RealtimeError):
  "This function is called when an error occurs."

  print("An error occured:", error)

def on_close():
  "This function is called when the connection has been closed."

  print("Closing Session")


# Create the Real-Time transcriber
transcriber = aai.RealtimeTranscriber(
  on_data=on_data,
  on_error=on_error,
  sample_rate=44_100,
  on_open=on_open, # optional
  on_close=on_close, # optional
)

# Start the connection
transcriber.connect()

# Open a microphone stream
microphone_stream = aai.extras.MicrophoneStream()

# Press CTRL+C to abort
transcriber.stream(microphone_stream)

transcriber.close()
```

</details>

<details>
  <summary>Transcribe a local audio file in real-time</summary>

```python
import assemblyai as aai


def on_data(transcript: aai.RealtimeTranscript):
  "This function is called when a new transcript has been received."

  if not transcript.text:
    return

  if isinstance(transcript, aai.RealtimeFinalTranscript):
    print(transcript.text, end="\r\n")
  else:
    print(transcript.text, end="\r")

def on_error(error: aai.RealtimeError):
  "This function is called when the connection has been closed."

  print("An error occured:", error)


# Create the Real-Time transcriber
transcriber = aai.RealtimeTranscriber(
  on_data=on_data,
  on_error=on_error,
  sample_rate=44_100,
)

# Start the connection
transcriber.connect()

# Only WAV/PCM16 single channel supported for now
file_stream = aai.extras.stream_file(
  filepath="audio.wav",
  sample_rate=44_100,
)

transcriber.stream(file_stream)

transcriber.close()
```

</details>

<details>
  <summary>End-of-utterance controls</summary>

```python
transcriber = aai.RealtimeTranscriber(...)

# Manually end an utterance and immediately produce a final transcript.
transcriber.force_end_utterance()

# Configure the threshold for automatic utterance detection.
transcriber = aai.RealtimeTranscriber(
    ...,
    end_utterance_silence_threshold=500
)

# Can be changed any time during a session.
# The valid range is between 0 and 20000.
transcriber.configure_end_utterance_silence_threshold(300)
```

</details>

<details>
  <summary>Disable partial transcripts</summary>

```python
# Set disable_partial_transcripts to `True`
transcriber = aai.RealtimeTranscriber(
    ...,
    disable_partial_transcripts=True
)
```

</details>

<details>
  <summary>Enable extra session information</summary>

```python
# Define a callback to handle the extra session information message
def on_extra_session_information(data: aai.RealtimeSessionInformation):
    "This function is called when a session information message has been received."

    print(data.audio_duration_seconds)

# Configure the RealtimeTranscriber
transcriber = aai.RealtimeTranscriber(
    ...,
    on_extra_session_information=on_extra_session_information,
)
```

</details>

---

### **Change the default settings**

You'll find the `Settings` class with all default values in [types.py](./assemblyai/types.py).

<details>
  <summary>Change the default timeout and polling interval</summary>

```python
import assemblyai as aai

# The HTTP timeout in seconds for general requests, default is 30.0
aai.settings.http_timeout = 60.0

# The polling interval in seconds for long-running requests, default is 3.0
aai.settings.polling_interval = 10.0
```

</details>

---

## Playground

Visit our Playground to try all of our Speech AI models and LeMUR for free:

- [Playground](https://www.assemblyai.com/playground)

# Advanced

## How the SDK handles Default Configurations

### Defining Defaults

If no `TranscriptionConfig` is passed to the `Transcriber` or its methods, a default instance of `TranscriptionConfig` is used.

If you would like to re-use the same `TranscriptionConfig` for all your transcriptions,
you can set it on the `Transcriber` directly:

```python
config = aai.TranscriptionConfig(punctuate=False, format_text=False)

transcriber = aai.Transcriber(config=config)

# will use the same config for all `.transcribe*(...)` operations
transcriber.transcribe("https://example.org/audio.wav")
```

### Overriding Defaults

You can override the default configuration later via the `.config` property of the `Transcriber`:

```python
transcriber = aai.Transcriber()

# override the `Transcriber`'s config with a new config
transcriber.config = aai.TranscriptionConfig(punctuate=False, format_text=False)
```

If you want to override the `Transcriber`'s configuration for a specific operation, pass a different config via the `config` parameter of the `.transcribe*(...)` method:

```python
config = aai.TranscriptionConfig(punctuate=False, format_text=False)
# set a default configuration
transcriber = aai.Transcriber(config=config)

transcriber.transcribe(
    "https://example.com/audio.mp3",
    # overrides the above configuration on the `Transcriber` with the following
    config=aai.TranscriptionConfig(dual_channel=True, disfluencies=True)
)
```

## Synchronous vs Asynchronous

Currently, the SDK provides two ways to transcribe audio files.

The synchronous approach halts the application's flow until the transcription has been completed.

The asynchronous approach allows the application to continue running while the transcription is being processed. The caller receives a [`concurrent.futures.Future`](https://docs.python.org/3/library/concurrent.futures.html) object which can be used to check the status of the transcription at a later time.

You can identify those two approaches by the `_async` suffix in the `Transcriber`'s method name (e.g. `transcribe` vs `transcribe_async`).
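
For illustration, here is a minimal sketch of the asynchronous variant (the audio URL is a placeholder):

```python
import assemblyai as aai

transcriber = aai.Transcriber()

# Returns a concurrent.futures.Future immediately instead of blocking
transcript_future = transcriber.transcribe_async("https://example.org/audio.mp3")

# ... do other work while the transcription is being processed ...

# Block only when the result is actually needed
transcript = transcript_future.result()
print(transcript.text)
```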

## Getting the HTTP status code

There are two ways of accessing the HTTP status code:

- All custom AssemblyAI Error classes have a `status_code` attribute.
- The latest HTTP response is stored in `aai.Client.get_default().last_response` after every API call. This approach also works when no exception is raised.

```python
transcriber = aai.Transcriber()

# Option 1: Catch the error
try:
    transcript = transcriber.submit("./example.mp3")
except aai.AssemblyAIError as e:
    print(e.status_code)

# Option 2: Access the latest response through the client
client = aai.Client.get_default()

try:
    transcript = transcriber.submit("./example.mp3")
except Exception:
    print(client.last_response)
    print(client.last_response.status_code)
```

## Polling Intervals

By default, we poll the `Transcript`'s status every `3s`. To adjust that interval:

```python
import assemblyai as aai

aai.settings.polling_interval = 1.0
```

## Retrieving Existing Transcripts

### Retrieving a Single Transcript

If you previously created a transcript, you can use its ID to retrieve it later.

```python
import assemblyai as aai

transcript = aai.Transcript.get_by_id("<TRANSCRIPT_ID>")

print(transcript.id)
print(transcript.text)
```

### Retrieving Multiple Transcripts as a Group

You can also retrieve multiple existing transcripts and combine them into a single `TranscriptGroup` object. This allows you to perform operations on the transcript group as a single unit, such as querying the combined transcripts with LeMUR.

```python
import assemblyai as aai

transcript_group = aai.TranscriptGroup.get_by_ids(["<TRANSCRIPT_ID_1>", "<TRANSCRIPT_ID_2>"])

summary = transcript_group.lemur.summarize(context="Customers asking for cars", answer_format="TLDR")

print(summary.response)
```

### Retrieving Transcripts Asynchronously

Both `Transcript.get_by_id` and `TranscriptGroup.get_by_ids` have asynchronous counterparts, `Transcript.get_by_id_async` and `TranscriptGroup.get_by_ids_async`, respectively. These functions immediately return a `Future` object, rather than blocking until the transcript(s) are retrieved.

See the above section on [Synchronous vs Asynchronous](#synchronous-vs-asynchronous) for more information.
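
As a brief sketch of the asynchronous counterparts (the transcript IDs are placeholders):

```python
import assemblyai as aai

# Both calls return a concurrent.futures.Future immediately
transcript_future = aai.Transcript.get_by_id_async("<TRANSCRIPT_ID>")
group_future = aai.TranscriptGroup.get_by_ids_async(["<TRANSCRIPT_ID_1>", "<TRANSCRIPT_ID_2>"])

# Resolve the futures when the results are needed
transcript = transcript_future.result()
transcript_group = group_future.result()
```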

            
