rev-ai

Name: rev-ai
Version: 2.19.4
Home page: https://github.com/revdotcom/revai-python-sdk
Summary: Rev AI makes speech applications easy to build!
Upload time: 2024-01-04 21:23:25
Author: Rev Ai
License: MIT license
Keywords: rev_ai
# Rev AI Python SDK

[![CI](https://github.com/revdotcom/revai-python-sdk/actions/workflows/build_test.yml/badge.svg)](https://github.com/revdotcom/revai-python-sdk/actions/workflows/build_test.yml)

## Documentation

See the [API docs](https://docs.rev.ai/sdk/python/) for more information about the API and
more Python examples.

## Installation

You don't need this source code unless you want to modify the package. If you just
want to use the package, run:

    pip install --upgrade rev_ai

Install from source with:

    python setup.py install

### Requirements

- Python 3.8+

## Usage

All you need to get started is your Access Token, which can be generated on
your [Access Token Settings page](https://www.rev.ai/access_token). Create a client with the
generated Access Token:

```python
from rev_ai import apiclient

# create your client
client = apiclient.RevAiAPIClient("ACCESS TOKEN")
```

### Sending a file

Once you've set up your client with your Access Token, sending a file is easy!

```python
# you can send a local file
job = client.submit_job_local_file("FILE PATH")

# or send a link to the file you want transcribed
job = client.submit_job_url("https://example.com/file-to-transcribe.mp3")
```

`job` will contain all the information normally found in a successful response from our
[Submit Job](https://docs.rev.ai/api/asynchronous/reference/#operation/SubmitTranscriptionJob) endpoint.

If you want to get fancy, both submit job methods take `metadata`, `notification_config`,
`skip_diarization`, `skip_punctuation`, `speaker_channels_count`, `custom_vocabularies`,
`filter_profanity`, `remove_disfluencies`, `delete_after_seconds`, `language`,
and `custom_vocabulary_id` as optional parameters.

The URL submission option also supports authentication headers via the `source_config` option.

You can request a transcript summary:

```python
from rev_ai.models import SummarizationOptions, SummarizationFormattingOptions

# submit a job with summarization enabled
job = client.submit_job_url("https://example.com/file-to-transcribe.mp3",
    language='en',
    summarization_config=SummarizationOptions(
        formatting_type=SummarizationFormattingOptions.BULLETS
    ))
```

You can request transcript translation into up to five languages:

```python
from rev_ai.models import TranslationOptions, TranslationLanguageOptions, TranslationModel

# submit a job with translation into Spanish and German
job = client.submit_job_url("https://example.com/file-to-transcribe.mp3",
    language='en',
    translation_config=TranslationOptions(
        target_languages=[
            TranslationLanguageOptions("es", TranslationModel.PREMIUM),
            TranslationLanguageOptions("de")
        ]
    ))
```

All options are described in the request body of the
[Submit Job](https://docs.rev.ai/api/asynchronous/reference/#operation/SubmitTranscriptionJob) endpoint.

### Human Transcription

If you want transcription to be performed by a human, both submit methods allow you to submit human transcription jobs
by setting `transcriber='human'`, with `verbatim`, `rush`, `segments_to_transcribe`, and `test_mode` as optional parameters.
Check out our [Human Transcription](https://docs.rev.ai/api/asynchronous/transcribers/#human-transcription) documentation for more details.

```python
# submit a human transcription job
job = client.submit_job_url("https://example.com/file-to-transcribe.mp3",
    transcriber='human',
    verbatim=False,
    rush=False,
    test_mode=True,
    segments_to_transcribe=[{
        'start': 2.0,
        'end': 4.5
    }])
```

### Checking your file's status

You can check the status of your transcription job using its `id`:

```python
job_details = client.get_job_details(job.id)
```

`job_details` will contain all information normally found in a successful response from
our [Get Job](https://docs.rev.ai/api/asynchronous/reference/#operation/GetJobById) endpoint.
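
In practice you usually poll `get_job_details` until the job reaches a terminal state. Here is a minimal polling sketch; the `wait_for_completion` helper and its callable argument are illustrative, not part of the SDK:

```python
import time

def wait_for_completion(get_status, poll_interval=30, timeout=3600,
                        terminal=("transcribed", "failed")):
    """Call get_status() repeatedly until it returns a terminal state.

    get_status: zero-argument callable returning the job's current status
    as a lowercase string, e.g. built from client.get_job_details(job.id).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in terminal:
            return status
        time.sleep(poll_interval)
    raise TimeoutError("job did not reach a terminal state in time")
```

Adjust `poll_interval` and `timeout` to your media length; very short intervals waste requests without making results arrive sooner.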

### Checking multiple files

You can retrieve a list of transcription jobs with optional parameters:

```python
jobs = client.get_list_of_jobs()

# limit the number of retrieved jobs
jobs = client.get_list_of_jobs(limit=3)

# get jobs starting after a certain job id
jobs = client.get_list_of_jobs(starting_after='Umx5c6F7pH7r')
```

`jobs` will contain a list of job details having all information normally found in a successful response
from our [Get List of Jobs](https://docs.rev.ai/api/asynchronous/reference/#operation/GetListOfJobs) endpoint.
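
The `limit`/`starting_after` pair supports paging through your full job history. A hypothetical helper (not part of the SDK) that drains every page; it assumes the jobs are dicts with an `'id'` key, so when plugging in the SDK client you would read `page[-1].id` from the returned `Job` objects instead:

```python
def iterate_all_jobs(list_jobs, page_size=100):
    """Yield every job by repeatedly requesting the next page.

    list_jobs: callable accepting limit= and starting_after= keyword
    arguments and returning a list of jobs (dicts with an 'id' key here).
    """
    starting_after = None
    while True:
        page = list_jobs(limit=page_size, starting_after=starting_after)
        if not page:
            return
        yield from page
        if len(page) < page_size:
            return  # short page means we reached the end
        starting_after = page[-1]["id"]
```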

### Deleting a job

You can delete a transcription job using its `id`:

```python
client.delete_job(job.id)
```

All data related to the job, such as the input media and transcript, will be permanently deleted.
A job can only be deleted once it has completed (either with success or failure).

### Getting your transcript

Once your file is transcribed, you can get your transcript in a few different forms:

```python
# as text
transcript_text = client.get_transcript_text(job.id)

# as json
transcript_json = client.get_transcript_json(job.id)

# or as a python object
transcript_object = client.get_transcript_object(job.id)

# or if you requested transcript translation(s)
transcript_object = client.get_translated_transcript_object(job.id, 'es')
```

Both the json and object forms contain all the information outlined in the response
of the [Get Transcript](https://docs.rev.ai/api/asynchronous/reference/#operation/GetTranscriptById) endpoint
when using the json response schema, while the text output is a string containing
just the text of your transcript.
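
If you need custom formatting, you can walk the json form yourself. A sketch that flattens the schema documented for the Get Transcript endpoint (`monologues`, each holding `elements` with a `value`) into speaker-labelled lines; the helper itself is illustrative, not part of the SDK:

```python
def transcript_to_text(transcript_json):
    """Flatten a Rev AI json transcript into 'Speaker N: ...' lines."""
    lines = []
    for monologue in transcript_json.get("monologues", []):
        # each element carries a fragment of text or punctuation in 'value'
        text = "".join(el.get("value", "")
                       for el in monologue.get("elements", []))
        lines.append("Speaker {}: {}".format(monologue.get("speaker"),
                                             text.strip()))
    return "\n".join(lines)
```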

### Getting transcript summary

If you requested a transcript summary, you can retrieve it as plain text or as a structured object:

```python
# as text
summary = client.get_transcript_summary_text(job.id)

# as json
summary = client.get_transcript_summary_json(job.id)

# or as a python object
summary = client.get_transcript_summary_object(job.id)
```

### Getting captions output

You can also get captions output from the SDK. We offer both SRT and VTT caption formats.
If you submitted your job as speaker channel audio, you must also provide the `channel_id` to be captioned:

```python
from rev_ai.models import CaptionType

captions = client.get_captions(job.id, content_type=CaptionType.SRT, channel_id=None)

# or if you requested transcript translation(s)
captions = client.get_translated_captions(job.id, 'es')
```

### Streamed outputs

Any output format can be retrieved as a stream. In these cases we return the raw HTTP response to you; the output can be retrieved via `response.content`, `response.iter_lines()`, or `response.iter_content()`.

```python
text_stream = client.get_transcript_text_as_stream(job.id)

json_stream = client.get_transcript_json_as_stream(job.id)

captions_stream = client.get_captions_as_stream(job.id)
```
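
Because these methods return a raw `requests`-style response, large outputs can be streamed straight to disk without holding them in memory. A small sketch (the `save_stream` helper is illustrative, not part of the SDK):

```python
def save_stream(response, path, chunk_size=8192):
    """Write a streamed HTTP response body to a file chunk by chunk."""
    with open(path, "wb") as f:
        for chunk in response.iter_content(chunk_size=chunk_size):
            if chunk:  # skip empty keep-alive chunks
                f.write(chunk)
```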

## Streaming audio

In order to stream audio, you will need to set up a streaming client and a media configuration for the audio you will be sending.

```python
from rev_ai.streamingclient import RevAiStreamingClient
from rev_ai.models import MediaConfig

#on_error(error)
#on_close(code, reason)
#on_connected(id)

config = MediaConfig()
streaming_client = RevAiStreamingClient("ACCESS TOKEN",
                                        config,
                                        on_error=ERRORFUNC,
                                        on_close=CLOSEFUNC,
                                        on_connected=CONNECTEDFUNC)
```

`on_error`, `on_close`, and `on_connected` are optional parameters that are functions to be called when the websocket errors, closes, and connects respectively. The default `on_error` raises the error, `on_close` prints out the code and reason for closing, and `on_connected` prints out the job ID.
If passing in custom functions, make sure you provide the right parameters. See the sample code for the parameters.

Once you have a streaming client set up with a `MediaConfig` and access token, you can obtain a transcription generator for your audio. You can also use a custom vocabulary with your streaming job by supplying the optional `custom_vocabulary_id` when starting a connection!

More optional parameters can be supplied when starting a connection: `metadata`, `filter_profanity`, `remove_disfluencies`, `delete_after_seconds`, and `detailed_partials`. For a description of these optional parameters, see our [streaming documentation](https://docs.rev.ai/api/streaming/requests/#request-parameters).

```python
response_generator = streaming_client.start(AUDIO_GENERATOR, custom_vocabulary_id="CUSTOM VOCAB ID")
```

`response_generator` is a generator object that yields the transcription results of the audio, including partial and final transcriptions. The `start` method creates a thread that sends audio pieces from the `AUDIO_GENERATOR` to our streaming endpoint.
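
A common way to build the `AUDIO_GENERATOR` is a generator that yields fixed-size byte chunks from a file. The sketch below is illustrative; the 8000-byte chunk size (roughly 0.25s of 16kHz 16-bit mono audio) is just an example choice, not an SDK requirement:

```python
def audio_chunks(path, chunk_size=8000):
    """Yield successive fixed-size byte chunks from a raw audio file."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:  # end of file
                return
            yield chunk
```

You would then call `streaming_client.start(audio_chunks("audio.raw"))`, where `audio.raw` is a placeholder path.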

If you want to end the connection early, you can!

```python
streaming_client.end()
```

Otherwise, the connection will end when the server receives an "EOS" message.

### Submitting custom vocabularies

In addition to passing custom vocabularies as parameters in the async API client, you can create and submit your custom vocabularies independently and directly to the custom vocabularies API, as well as check on their progress.

Primarily, the custom vocabularies client allows you to submit and preprocess vocabularies for use with the streaming client, in order to have streaming jobs with custom vocabularies!

This example shows how to construct custom vocabulary objects, submit them to the API, and check on their progress and metadata:

```python
from rev_ai import custom_vocabularies_client
from rev_ai.models import CustomVocabulary

# Create a client
client = custom_vocabularies_client.RevAiCustomVocabulariesClient("ACCESS TOKEN")

# Construct a CustomVocabulary object using your desired phrases
custom_vocabulary = CustomVocabulary(["Patrick Henry Winston", "Robert C Berwick", "Noam Chomsky"])

# Submit the CustomVocabulary
custom_vocabularies_job = client.submit_custom_vocabularies([custom_vocabulary])

# View the job's progress
job_state = client.get_custom_vocabularies_information(custom_vocabularies_job['id'])

# Get list of previously submitted custom vocabularies
custom_vocabularies_jobs = client.get_list_of_custom_vocabularies()

# Delete the CustomVocabulary
client.delete_custom_vocabulary(custom_vocabularies_job['id'])
```

For more details, check out the custom vocabularies example in our [examples](https://github.com/revdotcom/revai-python-sdk/tree/develop/examples).

# For Rev AI Python SDK Developers

Remember to follow the PEP8 style guide in your development. Your code editor likely has Python PEP8 linting packages that can assist you.

# Local testing instructions

Prerequisites: virtualenv, tox

To test locally, use the following commands from the repo root:

    virtualenv ./sdk-test
    . ./sdk-test/bin/activate
    tox

This will run the test suite locally, saving significant dev time over
waiting for the CI tool to pick it up.


=======
History
=======

0.0.0 (2018-09-28)
------------------

* Initial alpha release

2.1.0
------------------

* Revamped official release

2.1.1
------------------

* File upload bug fixes

2.2.1
------------------

* Better Documentation

2.2.2
------------------

* Fix pypi readme formatting

2.3.0
------------------

* Add get_list_of_jobs

2.4.0
------------------

* Add support for custom vocabularies

2.5.0
------------------

* Add examples
* Improve error handling
* Add streaming client

2.6.0
------------------

* Support skip_punctuation
* Support .vtt captions output
* Support speaker channel jobs

2.6.1
------------------

* Add metadata to streaming client

2.7.0
------------------

* Add custom vocabularies to streaming client

2.7.1
------------------

* Use v1 of the streaming api
* Add custom vocabulary to async example
* Add filter_profanity to async and streaming clients, examples, and documentation
* Add remove_disfluencies to async client

2.11.0
------------------

* Add language selection option for multi-lingual ASR jobs to async client

2.12.0
------------------

* Add custom_vocabulary_id to async client

2.13.0
------------------
* Add detailed_partials to streaming client
* Switch to Github Actions for automated testing

2.14.0
------------------
* Add transcriber to async client
* Add verbatim, rush, segments_to_transcribe, test_mode to async client for human transcription
* Add start_ts and transcriber to streaming client

2.15.0
------------------
* Add topic extraction client
* Add speaker_names to async client for human transcription

2.16.0
------------------
* Add sentiment analysis client
* Add source_config and notification_config job options to support customer provided urls with authentication headers
* Deprecate media_url option, replace with source_config
* Deprecate callback_url option, replace with notification_config

2.17.0
------------------
* Add language to the streaming client

2.18.0
------------------
* Add atmospherics and speaker_count support
* Deprecated support for Python versions up to 3.8

2.19.0
------------------
* Add async translation and summarization

            
