# Speech2Speech
<img src="speech2speech/imgs/speech2speech.png" alt="image of main screen"
height="762" width="723">
The Speech2Speech Python package is a Streamlit Web application that **models
all phases of speech-to-speech translation**, including:
- recording speech in the source language,
- converting the source language speech to source language text,
- translating the source language text to target language text, and
- converting the translated text to speech in the target language.
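
As a rough illustration of how these phases map onto the package's stated dependencies (Whisper via the OpenAI API, ChatGPT and gTTS), here is a minimal Python sketch. It is not the package's actual code: the file names, target language and prompt wording are assumptions, and phase 1 (recording, typically done with pyaudio) is omitted.

    # Minimal sketch of phases 2-4; NOT the package's actual implementation.
    # Assumes the pre-1.0 openai client and gTTS; "dictation.wav" stands in
    # for a recording produced in phase 1 (e.g. with pyaudio).
    import openai
    from gtts import gTTS

    openai.api_key = "sk-..."  # your OpenAI API key

    # Phase 2: source-language speech -> source-language text (Whisper)
    with open("dictation.wav", "rb") as audio_file:
        transcript = openai.Audio.transcribe("whisper-1", audio_file)["text"]

    # Phase 3: source-language text -> target-language text (ChatGPT)
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Translate the following text into French:\n" + transcript,
        }],
    )
    translation = response["choices"][0]["message"]["content"]

    # Phase 4: target-language text -> target-language speech (gTTS)
    gTTS(translation, lang="fr").save("translation.mp3")
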
As a web application, it can be accessed through any web browser and is
**compatible with Linux, Mac, and Windows operating systems**.
Speech2Speech is currently **configured to translate to and from 13 different
languages**. Although translation quality may vary depending on the target
language, it is generally good for widely spoken languages such as English,
French, Portuguese, Spanish, German, Dutch and Italian. Speech2Speech **can be
configured for many more languages than these** (specified in the config.ini
file), as long as they are supported by Whisper, ChatGPT and gTTS, the
packages on which it depends.
Speech2Speech is designed to be accessible to a **broad audience**. One of
its key advantages is that it is very easy to use:
- The package **automatically detects the source language used in speech**, so
the user is not asked to specify it.
- There is **no need to train the software or the user before actually using
the product**. It works straight out of the box, with no further tuning
or configuration required.

This makes Speech2Speech a highly accessible tool that anyone can use,
regardless of their technical expertise or experience with speech recognition
and machine translation technology.
It is also hoped that this technology could be leveraged to develop
products specifically designed for **persons with visual impairments**,
empowering them to have texts read aloud, or to dictate their own texts and
listen to them read back before forwarding them to their intended recipients.
Each phase of the workflow creates a file whose name is defined in the
config.ini file. Advanced users can **start and/or interrupt the workflow
wherever they need** by inserting their own files in the `speech2speech/data`
subdirectory and adapting the config.ini file to refer to them.
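
For instance, a user who already has a transcription could place it in `speech2speech/data`, point the relevant config.ini entry at it, and run only the translation and text-to-speech steps. The following is a purely hypothetical sketch of reading such per-phase file names; the section and key names are illustrative, not necessarily those of the shipped config.ini.

    # Hypothetical example: reading per-phase file names from config.ini with
    # Python's standard configparser. Section and key names are assumptions.
    from configparser import ConfigParser

    config = ConfigParser()
    config.read("config.ini")

    # e.g. a [files] section mapping each phase to a file in speech2speech/data
    recording_file = config.get("files", "recording")          # dictated audio
    transcription_file = config.get("files", "transcription")  # source-language text
    translation_file = config.get("files", "translation")      # target-language text
    target_speech_file = config.get("files", "target_speech")  # synthesized audio
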
Prerequisites
-----------------------------------------------------------------------------
You need to [get an OpenAI API key](https://www.howtogeek.com/885918/how-to-get-an-openai-api-key/#autotoc_anchor_0) in order to use this app.
Speech2Speech local installation
--------------------------
Run the following command:
pip install speech2speech
To launch it locally, follow these steps:
1. Make sure the microphone and speakers of your device are on.
2. Navigate to the directory where your Speech2Speech program is located
using the cd command.
3. Type the following command in the terminal to launch Speech2Speech:
`streamlit run speech2speech.py`
Workflow
----------
Here's a step-by-step guide on how to use the full workflow of Speech2Speech:
1. Copy your OpenAI API key and paste it into the text box below the label
"OpenAI API Key". The API key you enter will not be visible on
the screen by default.
2. Click the "Record Audio" button to start recording.
3. Begin speaking or reading aloud. When your dictation is finished, press
   CTRL+E to stop recording it. ChatGPT automatically detects the language
   you're speaking (as long as that language is supported), so there's no
   need to specify it.
4. Click the "Transcribe" button to convert your dictation into text.
5. Select your desired target language from the dropdown menu under "Target
Language".
6. Click the "Translate" button to translate the transcription into your
chosen target language. The translated text will appear on a blue
background after a few seconds.
7. Click the "Read Translation" button to listen to the translated text.
8. If you want to repeat the process with a new dictation, click the "Refresh
Page" button to reset the page.
As indicated above, you can also run only part of this full workflow by
specifying the name(s) of the file(s) you want to use in the config.ini file
and then clicking the relevant button in the user interface.
What to do if you encounter issues
-------------------------------
If ChatGPT or Speech2Speech gets stuck, or you encounter any other issue,
simply refresh the browser page. Note, however, that ChatGPT can be heavily
loaded at certain times of day and may respond slowly for a while.