### Run your IDE as administrator
You will get the following error if you don't have administrator permission:
**OSError: [WinError 1314] A required privilege is not held by the client**
### Requirements
* Python 3.8 or greater
### GPU execution
GPU execution requires CUDA 11 and the following NVIDIA libraries to be installed:
* [cuBLAS for CUDA 11](https://developer.nvidia.com/cublas)
* [cuDNN 8 for CUDA 11](https://developer.nvidia.com/cudnn)
There are multiple ways to install these libraries. The recommended way is described in the official NVIDIA documentation, but we also suggest other installation methods below.
### Google Colab:
On Google Colab, run this to install the CUDA dependencies:
```
!apt install libcublas11
```
See this example [notebook](https://colab.research.google.com/drive/1lpoWrHl5443LSnTG3vJQfTcg9oFiCQSz?usp=sharing).
### Installation:
```
pip install speechlib
```
This library performs speaker diarization, speaker recognition, and transcription on a single WAV file to produce a transcript with actual speaker names. It also returns an array containing the result information.
This library provides the following audio preprocessing functions:
1. convert other audio formats to WAV
2. convert a stereo WAV file to mono
3. re-encode a WAV file to 16-bit PCM encoding
The Transcriptor method takes 7 arguments:
1. file to transcribe
2. log_folder to store transcription
3. language used for transcribing (language code is used)
4. model size ("tiny", "small", "medium", "large", "large-v1", "large-v2", "large-v3")
5. ACCESS_TOKEN: Hugging Face access token (also get permission to access `pyannote/speaker-diarization@2.1`)
6. voices_folder (contains speaker voice samples for speaker recognition)
7. quantization: determines whether to use int8 quantization. Quantization may speed up the process but lower the accuracy.
voices_folder should contain subfolders named after the speakers. Each subfolder belongs to one speaker and can contain multiple voice samples. These samples are used by speaker recognition to identify the speaker.
If voices_folder is not provided, speaker tags will be arbitrary.
log_folder is where the final transcript is stored as a text file.
The transcript also indicates the timeframe, in seconds, during which each speaker speaks.
### Transcription example:
```
import os
from speechlib import Transcriptor
file = "obama_zach.wav" # your audio file
voices_folder = "" # voices folder containing voice samples for recognition
language = "en" # language code
log_folder = "logs" # log folder for storing transcripts
modelSize = "tiny" # size of model to be used [tiny, small, medium, large-v1, large-v2, large-v3]
quantization = False # setting this 'True' may speed up the process but lower the accuracy
ACCESS_TOKEN = "huggingface api key" # get permission to access pyannote/speaker-diarization@2.1 on huggingface
# quantization only works on faster-whisper
transcriptor = Transcriptor(file, log_folder, language, modelSize, ACCESS_TOKEN, voices_folder, quantization)
# use normal whisper
res = transcriptor.whisper()
# use faster-whisper (simply faster)
res = transcriptor.faster_whisper()
# use a custom trained whisper model
res = transcriptor.custom_whisper("D:/whisper_tiny_model/tiny.pt")
# use a huggingface whisper model
res = transcriptor.huggingface_model("Jingmiao/whisper-small-chinese_base")
# use assembly ai model
res = transcriptor.assemby_ai_model("assemblyAI api key")
# res --> [["start", "end", "text", "speaker"], ["start", "end", "text", "speaker"]...]
```
#### If you don't want speaker names, keep voices_folder as an empty string: `""`
start: starting time of speech in seconds
end: ending time of speech in seconds
text: transcribed text for speech during start and end
speaker: speaker of the text
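As a sketch of how the returned array can be consumed (the segment values below are made-up placeholders, not real output of the library):

```python
# Hypothetical segments shaped like the documented result:
# [["start", "end", "text", "speaker"], ...]
res = [
    [0.0, 4.2, "Hello and welcome to the show.", "obama"],
    [4.2, 7.9, "Thanks for having me.", "zach"],
]

def format_transcript(segments):
    """Render each segment as 'speaker (start-end): text'."""
    return "\n".join(
        f"{speaker} ({start:.1f}s - {end:.1f}s): {text}"
        for start, end, text, speaker in segments
    )

print(format_transcript(res))
```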
#### voices folder structure:
```
voices_folder
|---> person1
| |---> sample1.wav
| |---> sample2.wav
| ...
|
|---> person2
| |---> sample1.wav
| |---> sample2.wav
| ...
|--> ...
```
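A minimal sketch for laying out this folder structure from Python (the speaker names and sample lists are hypothetical; you would copy or record real WAV samples into each subfolder):

```python
import os

voices_folder = "voices_folder"
# hypothetical speakers -> WAV sample file names for each of them
speakers = {
    "person1": ["sample1.wav", "sample2.wav"],
    "person2": ["sample1.wav", "sample2.wav"],
}

for name in speakers:
    # one subfolder per speaker; the subfolder name becomes the speaker tag
    os.makedirs(os.path.join(voices_folder, name), exist_ok=True)
```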
Supported language codes:
```
"af", "am", "ar", "as", "az", "ba", "be", "bg", "bn", "bo", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "es", "et", "eu", "fa", "fi", "fo", "fr", "gl", "gu", "ha", "haw", "he", "hi", "hr", "ht", "hu", "hy", "id", "is","it", "ja", "jw", "ka", "kk", "km", "kn", "ko", "la", "lb", "ln", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn","mr", "ms", "mt", "my", "ne", "nl", "nn", "no", "oc", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk","sl", "sn", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "tg", "th", "tk", "tl", "tr", "tt", "uk", "ur", "uz","vi", "yi", "yo", "zh", "yue"
```
Supported language names:
```
"Afrikaans", "Amharic", "Arabic", "Assamese", "Azerbaijani", "Bashkir", "Belarusian", "Bulgarian", "Bengali","Tibetan", "Breton", "Bosnian", "Catalan", "Czech", "Welsh", "Danish", "German", "Greek", "English", "Spanish","Estonian", "Basque", "Persian", "Finnish", "Faroese", "French", "Galician", "Gujarati", "Hausa", "Hawaiian","Hebrew", "Hindi", "Croatian", "Haitian", "Hungarian", "Armenian", "Indonesian", "Icelandic", "Italian", "Japanese","Javanese", "Georgian", "Kazakh", "Khmer", "Kannada", "Korean", "Latin", "Luxembourgish", "Lingala", "Lao","Lithuanian", "Latvian", "Malagasy", "Maori", "Macedonian", "Malayalam", "Mongolian", "Marathi", "Malay", "Maltese","Burmese", "Nepali", "Dutch", "Norwegian Nynorsk", "Norwegian", "Occitan", "Punjabi", "Polish", "Pashto","Portuguese", "Romanian", "Russian", "Sanskrit", "Sindhi", "Sinhalese", "Slovak", "Slovenian", "Shona", "Somali","Albanian", "Serbian", "Sundanese", "Swedish", "Swahili", "Tamil", "Telugu", "Tajik", "Thai", "Turkmen", "Tagalog","Turkish", "Tatar", "Ukrainian", "Urdu", "Uzbek", "Vietnamese", "Yiddish", "Yoruba", "Chinese", "Cantonese",
```
### Audio preprocessing example:
```
from speechlib import PreProcessor
file = "obama1.mp3"
# initialize
prep = PreProcessor()
# convert mp3 to wav
wav_file = prep.convert_to_wav(file)
# convert wav file from stereo to mono
prep.convert_to_mono(wav_file)
# re-encode wav file to have 16-bit PCM encoding
prep.re_encode(wav_file)
```
### Performance
```
These metrics are from Google Colab tests.
These metrics do not take into account model download times.
These metrics are done without quantization enabled.
(quantization will make this even faster)
metrics for faster-whisper "tiny" model:
on gpu:
audio name: obama_zach.wav
duration: 6 min 36 s
diarization time: 24s
speaker recognition time: 10s
transcription time: 64s
metrics for faster-whisper "small" model:
on gpu:
audio name: obama_zach.wav
duration: 6 min 36 s
diarization time: 24s
speaker recognition time: 10s
transcription time: 95s
metrics for faster-whisper "medium" model:
on gpu:
audio name: obama_zach.wav
duration: 6 min 36 s
diarization time: 24s
speaker recognition time: 10s
transcription time: 193s
metrics for faster-whisper "large" model:
on gpu:
audio name: obama_zach.wav
duration: 6 min 36 s
diarization time: 24s
speaker recognition time: 10s
transcription time: 343s
```
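From the numbers above one can derive approximate real-time factors (total processing time divided by audio duration). This is just arithmetic on the quoted metrics, not an additional benchmark:

```python
# Metrics quoted above: 6 min 36 s audio on GPU, no quantization.
audio_seconds = 6 * 60 + 36  # 396 s

# model size -> (diarization s, speaker recognition s, transcription s)
metrics = {
    "tiny":   (24, 10, 64),
    "small":  (24, 10, 95),
    "medium": (24, 10, 193),
    "large":  (24, 10, 343),
}

for model, (diar, recog, trans) in metrics.items():
    total = diar + recog + trans
    rtf = total / audio_seconds  # fraction of real time spent processing
    print(f"{model}: total {total}s, {rtf:.2f}x real time")
```

For example, the "tiny" model processes the whole pipeline in roughly a quarter of the audio's duration, while "large" approaches real time.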
#### Why not use pyannote/speaker-diarization-3.1, speechbrain >= 1.0.0, or faster-whisper >= 1.0.0?
Because the older versions give more accurate transcriptions; this was tested.
This library uses the following Hugging Face models:
#### https://huggingface.co/speechbrain/spkrec-ecapa-voxceleb
#### https://huggingface.co/Ransaka/whisper-tiny-sinhala-20k-8k-steps-v2
#### https://huggingface.co/pyannote/speaker-diarization