# theramin
A video-to-sound theramin.
To install: ```pip install theramin```
# About this project
The objective of this project is to make "instruments" that work like a theramin:
the features of the sounds produced are controlled by the positions of the
left and right hands.
That, but generalized.
A theramin is an analog device whose sound is parametrized by only two features:
pitch and volume, controlled by the left and right hand respectively; the
distance of each hand from the device's sensors determines those two features.
Here, though, we'll use video as our sensor, so we can not only detect the positions
of the hands, but also the positions of the fingers, and the angles between them,
detect gestures, and so on. Further, we can use facial expressions to add to
the parametrization of the sound produced.
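To make this concrete, here is a minimal sketch of the kind of mapping involved, assuming normalized landmark coordinates in `[0, 1]` (as MediaPipe-style hand detection provides); the function names and frequency range are illustrative, not part of the package's API:

```python
# Hypothetical mapping from a normalized vertical wrist position to sound
# parameters. Assumes y is in [0, 1], with 0 at the top of the frame.

def wrist_to_pitch(y: float, low_hz: float = 220.0, high_hz: float = 880.0) -> float:
    """Map wrist height to a frequency: a higher hand (smaller y) gives a higher pitch."""
    y = min(max(y, 0.0), 1.0)  # clamp to the valid range
    return low_hz + (1.0 - y) * (high_hz - low_hz)

def wrist_to_volume(y: float) -> float:
    """Map wrist height to an amplitude in [0, 1]: a higher hand is louder."""
    return 1.0 - min(max(y, 0.0), 1.0)
```

Richer features (finger angles, gestures, facial expressions) would simply be more such scalar extractors feeding more sound parameters.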
What the project wants to grow up to be is a platform for creating musical instruments
that are controlled by the body, enabling the user to determine the mapping between
the video stream and the sound produced.
We hope this will enable the creation of new musical instruments that are more
intuitive and expressive than the traditional theramin.
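The kind of user-defined mapping the platform aims at could look like the following sketch. All names here are illustrative assumptions, not the package's actual API: the user binds named sound parameters to feature extractors over a frame's detections:

```python
# A sketch of a user-configurable instrument: the user supplies a mapping
# from sound-parameter names to functions that extract a scalar from one
# frame's detections. Everything here is hypothetical, for illustration.

from typing import Callable, Dict

FeatureExtractor = Callable[[dict], float]  # detections -> scalar feature

def make_instrument(mapping: Dict[str, FeatureExtractor]):
    """Return a function turning one frame's detections into sound parameters."""
    def frame_to_sound_params(detections: dict) -> Dict[str, float]:
        return {param: extract(detections) for param, extract in mapping.items()}
    return frame_to_sound_params

# Example: left wrist height controls pitch, right wrist height controls volume.
instrument = make_instrument({
    "pitch": lambda d: 880.0 - 660.0 * d["left_wrist_y"],
    "volume": lambda d: 1.0 - d["right_wrist_y"],
})

params = instrument({"left_wrist_y": 0.5, "right_wrist_y": 0.25})
```

The point of the design is that swapping the dictionary swaps the instrument, without touching the video or audio pipeline.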
Right now, the project is very much in its infancy, and we're just trying to get the
basics working.
Here's a bit about what exists so far:
* `_03_hand_gesture_recognition_with_video_feedback.py`: Detects hand gestures and overlays
the detected landmarks and categories on the video stream in real-time.
* `_06_hand_gesture_w_slabs.py`: The same, but using slabs, for a better framework setup.
* `_05_a_wrist_lines.py`: Draws lines showing the wrist position on the video stream
in real-time.
* `_07_wrist_lines_w_sound.py`: Draws lines showing the wrist position on the video stream
  and produces sound based on the wrist position. The resulting sound is choppy, though,
  likely due to the lack of a proper real-time setup: the sound is synthesized in
  real time, but only one small waveform chunk at a time, with no continuity
  between chunks.
**We need to buffer the stream of sound features and have an independent process that
reads from the buffer and produces the sound, taking more than one chunk of
information to determine the sound produced.**
* `_08_keyboard_theramin.py`: To reduce the complexity of the sound produced, this script
uses a keyboard to produce sound, with the pitch and volume controlled by the
left and right hand, respectively. This doesn't work well at all. More work is needed.
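The buffering scheme described above could be sketched as follows. This is a minimal, assumed design, not the project's current code: the video/gesture side pushes `(frequency, amplitude)` targets into a queue, and an independent synthesis loop drains it, smoothing parameters across chunks and carrying oscillator phase between chunks so the boundaries don't click. In practice the consumer would feed an audio callback (e.g. a `sounddevice` output stream); here it just returns chunks so the logic is testable:

```python
# Producer/consumer sketch for smooth real-time synthesis. The key fixes for
# choppiness are (a) decoupling synthesis from video-frame timing via a queue,
# and (b) keeping the oscillator phase continuous across chunk boundaries.

import math
import queue

SAMPLE_RATE = 44_100
CHUNK = 1_024

class BufferedSine:
    def __init__(self):
        self.targets = queue.Queue()  # filled by the video/gesture thread
        self.freq = 440.0             # current (smoothed) parameters
        self.amp = 0.0
        self.phase = 0.0              # carried across chunks: no clicks

    def push(self, freq: float, amp: float) -> None:
        """Called from the video side whenever new wrist features arrive."""
        self.targets.put((freq, amp))

    def next_chunk(self) -> list:
        """Called from the audio side; drains pending targets, then synthesizes."""
        while not self.targets.empty():
            f, a = self.targets.get_nowait()
            # Low-pass filter toward the newest target to avoid abrupt jumps.
            self.freq += 0.2 * (f - self.freq)
            self.amp += 0.2 * (a - self.amp)
        out = []
        step = 2 * math.pi * self.freq / SAMPLE_RATE
        for _ in range(CHUNK):
            out.append(self.amp * math.sin(self.phase))
            self.phase = (self.phase + step) % (2 * math.pi)
        return out
```

Running `next_chunk` from an audio callback while the video loop calls `push` would give the independent-process behavior described above, with the queue acting as the buffer of sound features.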