# LinTO Studio SDK
LinTO Studio SDK is a wrapper around the LinTO Studio API. You can generate an auth token from [LinTO Studio](https://studio.linto.ai) in the organization settings.
It is available in Python and JavaScript (NodeJS and web browsers).
## Install
**Python**
```sh
pip install linto
```
**NodeJS or compiled front-end project**
```sh
npm install @linto-ai/linto
```
**Plain JS in web browser**
```html
<script type="module" src="https://unpkg.com/@linto-ai/linto/index.js"></script>
```
## How to use
### NodeJS or Web browser
```javascript
// Initialise the client. Pick the variant that matches your environment.

// NodeJS
import LinTO from "@linto-ai/linto"
const linTO = new LinTO({
  authToken: "authToken",
})

// Browser (the <script type="module"> above exposes window.LinTO)
// const linTO = new window.LinTO({
//   authToken: "authToken",
// })

// Load the audio file, again depending on your environment.

// NodeJS
import fs from "fs"
const file = await fs.openAsBlob("path/to/audio.mp3")

// Browser: from an <input type="file" /> element
// const file = document.getElementById("file").files[0]

const handle = await linTO.transcribe(file)

handle.addEventListener("update", (e) => {
  console.log("Audio transcription processing", e.detail)
})

handle.addEventListener("done", (e) => {
  console.log("Audio transcription completed")
  console.log("Full text", e.detail.fullText)
  console.log("Formatted output", e.detail.toFormat())
  console.log("Turns list", e.detail.turns)
  console.log("Api response", e.detail.response)
})

handle.addEventListener("error", () => {
  console.log("Error while processing the audio")
})
```
**Full example**
- [NodeJS](javascript/test.js)
- [Browser](javascript/test.html)
### Python
```python
from linto import LinTO

linTO = LinTO(auth_token="auth_token")

# Read the audio file as bytes
with open("path/to/audio.mp3", "rb") as f:
    file = f.read()

# transcribe() is a coroutine: call it from an async context
# (see python/test.py for a complete runnable script)
handle = await linTO.transcribe(file)

def on_update(data):
    print("Audio transcription processing", data)

def on_done(data):
    print("Audio transcription completed")
    print("Full text", data.full_text)
    print("Formatted output", data.to_format())
    print("Turns list", data.turns)
    print("Api response", data.response)

def on_error(data):
    print("Error while processing the audio")

handle.on("update", on_update)
handle.on("done", on_done)
handle.on("error", on_error)
```
See the complete Python script at [python/test.py](python/test.py)
## Documentation
_Options written in camelCase for JavaScript are written in snake_case for Python._
### Initialisation
```javascript
// Javascript
const linTO = new LinTO({ authToken: "auth_token", ...options })
```
```python
# Python
linTO = LinTO(auth_token="auth_token", **options)
```
#### Options
| Parameter | required | value | description | default value |
| --------- | -------- | ------ | ------------------- | ------------------------------ |
| authToken | yes | String | Studio auth token | |
| baseUrl | no | String | Studio API base url | https://studio.linto.ai/cm-api |
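
For example, you can point the client at a self-hosted Studio deployment via `baseUrl`. A minimal sketch in Python, assuming `base_url` is the snake_case form of the option above and using a placeholder URL:

```python
from linto import LinTO

# Sketch: base_url is assumed to be the snake_case form of the baseUrl
# option documented above; the URL is a placeholder for your deployment.
linTO = LinTO(
    auth_token="auth_token",
    base_url="https://studio.example.org/cm-api",
)
```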
### Transcribe
```javascript
// Javascript
const handle = await linTO.transcribe(file, { ...options })
handle.addEventListener("update", callback)
handle.addEventListener("done", callback)
handle.addEventListener("error", callback)
```
```python
# Python
handle = await linTO.transcribe(file, **options)
handle.on("update", callback)
handle.on("done", callback)
handle.on("error", callback)
```
#### Options
| Parameter | required | value | description | default value |
| ----------------- | -------- | ------------------------------- | ---------------------------------------------------------------------------------------- | ----------------------- |
| file | yes | File or Blob | Audio file to transcribe | |
| enableDiarization | no | Bool | Enable speaker diarization | True |
| numberOfSpeaker | no | Int | Number of speakers for diarization; 0 means auto-detection | 0 |
| language | no | 2-letter language code or "\*" | Language the audio should be transcribed in; "\*" means auto-detection and multi-language support | "\*" |
| enablePunctuation | no | Bool | Enable automatic punctuation recognition | True |
| name | no | String | Name of the media in LinTO Studio | "imported file ${date}" |
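
As an illustration, the sketch below passes a few of these options through the Python client (snake_case names are assumed per the naming note above; the values are arbitrary):

```python
# Sketch: option names assume the snake_case equivalents of the table above.
handle = await linTO.transcribe(
    file,
    enable_diarization=True,   # keep speaker diarization on
    number_of_speaker=2,       # expect two speakers instead of auto (0)
    language="fr",             # force French instead of auto-detection ("*")
    enable_punctuation=True,   # keep automatic punctuation
    name="weekly meeting",     # media name shown in LinTO Studio
)
```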
#### toFormat options
| Parameter | required | value | description | default value |
| -------------- | -------- | ------ | ------------------------------------------------------------------------------------------------------- | ---------------------------------------------- |
| sep | no | String | Separator between metadata fields | " - " |
| metaTextSep | no | String | Separator between metadata and text | " : " |
| eol | no | String | End-of-line character ("CRLF", "LF", or None). If neither "CRLF" nor "LF", no line break is added. | "CRLF" |
| ensureFinalEOL | no | Bool | Whether to ensure final end of line | false |
| include | no | Object | Which metadata to include (speaker, lang, timestamp) | { speaker: true, lang: true, timestamp: true } |
| order | no | Array | Order of metadata in output | ["speaker", "lang", "timestamp"] |
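
For instance, here is a hedged sketch of formatting the result from the `done` callback with custom separators and metadata selection (again assuming snake_case keyword names in Python):

```python
def on_done(data):
    # Sketch: keyword names assume the snake_case equivalents of the table above.
    text = data.to_format(
        sep=" | ",                # separator between metadata fields
        meta_text_sep=" : ",      # separator between metadata and text
        eol="LF",                 # use LF line endings
        ensure_final_eol=True,    # end the output with a line break
        include={"speaker": True, "lang": False, "timestamp": True},
        order=["timestamp", "speaker"],
    )
    print(text)
```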
## Coming soon 🏗️
### Transcribe video conference
```javascript
const meeting = await linTO.transcribeVideoConference({ type, url, ...options })
meeting.addEventListener("connected", callback)
meeting.addEventListener("meeting_start", callback)
meeting.addEventListener("people_join", callback)
meeting.addEventListener("meeting_end", callback)
meeting.addEventListener("transcription", callback)
const handle = await meeting.offlineTranscription({ ...options })
handle.addEventListener("update", callback)
handle.addEventListener("done", callback)
handle.addEventListener("error", callback)
```
### Live transcription
```javascript
const live = await linTO.transcribeLive({ ...options })
live.connectAudio(source)
live.startTranscription()
live.stopTranscription()
live.addEventListener("transcription", callback)
```