py-isaac


Name: py-isaac
Version: 0.1.3
Home page: https://github.com/n1teshy/py-isaac
Summary: Coolest locally run AI assistant
Upload time: 2025-02-15 01:09:48
Maintainer: None
Docs URL: None
Author: Nitesh Yadav
Requires Python: >=3.9
License: None
Keywords: AI assistant, local assistant
Requirements: yapper-tts, pyreadline3, py-listener, psutil, rich
Travis-CI: none
Coveralls test coverage: none
## I.S.A.A.C - Intelligent System for Advanced Assistance and Companionship

![Demo Image](https://github.com/n1teshy/py-isaac/blob/main/images/1.png)

I.S.A.A.C is a completely local, on-terminal AI assistant that brings ChatGPT-like features to your terminal, so you don't have to switch windows every two minutes. It comes with a set of commands and features you can turn on or off to get the most out of it, and it can talk to you using locally run speech-to-text and text-to-speech models, letting you put your fingers to better use.

- Run `pip install py-isaac`.
- Run `isaac`.
- Type `:commands` to list all available commands.
- Type `hello` and you will be prompted to choose a language model provider.
- Select a provider: either [Gemini](https://gemini.google.com/) or [Groq](https://console.groq.com/).
- Now, you will be prompted for an API key.
- Create a [Gemini API key](https://ai.google.dev/gemini-api/docs/api-key) if you selected Gemini or a [Groq API key](https://console.groq.com/keys) if you selected Groq.
- Paste the API key into the prompt.
- Done.
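
Once a provider and key are set, the assistant forwards your prompts to that provider's API. The snippet below is only a hedged sketch of what such a request can look like against Groq's OpenAI-compatible chat-completions endpoint, using nothing but the standard library; the endpoint, model id, environment variable, and payload shape are illustrative assumptions, not py-isaac's actual internals.

```python
# Hedged sketch: how a terminal assistant might forward a prompt to Groq.
# Not py-isaac's actual code; endpoint, model id and payload shape are assumptions.
import json
import os
import urllib.request

API_KEY = os.environ["GROQ_API_KEY"]  # hypothetical: stand-in for the key you paste
URL = "https://api.groq.com/openai/v1/chat/completions"  # OpenAI-compatible endpoint

payload = {
    "model": "llama-3.1-8b-instant",  # example model id, the kind chosen via `:select lm`
    "messages": [{"role": "user", "content": "hello"}],
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)

with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)

print(reply["choices"][0]["message"]["content"])
```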


### Available commands
- `:toggle` - toggles features on/off.
  - `:toggle speech` to toggle the assistant's speech.
  - `:toggle context` to toggle the use of conversation history for coherent responses.
  - `:toggle hearing` to toggle the assistant's ability to hear you.

`NOTE: to interact with the assistant using only your voice, turn on both speech and hearing.`

---

- `:select` - selects from available options.
  - `:select lm_provider` to select the language model provider.
  - `:select lm` to select the model for generating responses.
  - `:select voice` to select a [Piper](https://github.com/rhasspy/piper) text-to-speech model for the assistant to speak with.
  - `:select whisper` to select a [Whisper](https://github.com/openai/whisper) speech-to-text model for the assistant to interpret your voice with (see the sketch below).
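
To give a feel for what `:select whisper` chooses between, here is a minimal standalone sketch of speech-to-text with the [openai-whisper](https://github.com/openai/whisper) package. py-isaac drives this through its own dependencies (e.g. `py-listener`), so treat the code as an illustration of the model sizes, not the assistant's real pipeline.

```python
# Minimal sketch of local speech-to-text with openai-whisper.
# py-isaac wires this up differently; this only illustrates the model choice.
import whisper

# `:select whisper` effectively picks a checkpoint size:
# tiny, base, small, medium, large -- bigger is more accurate but slower.
model = whisper.load_model("base")

# "query.wav" is a placeholder path for a recorded voice query.
result = model.transcribe("query.wav")
print(result["text"])
```
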
---
- `:key` sets the LLM API key for the selected provider; run this when the assistant can't process your queries, which usually means the key has expired.
- `:instruct` instructs the model to behave a certain way, using the [system message](https://promptmetheus.com/resources/llm-knowledge-base/system-message) (see the sketch after this list).
- `:status` to see status, selected settings and resource consumption.
- `:mute` to mute the assistant while it's speaking.
- `:cmd` to launch a shell session for running shell commands; run `exit` in that session to get back to the assistant.
- `:commands` to see all available commands.
- `:clear` to clear the terminal.
- `:exit` to turn the assistant off.
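
As a rough illustration of how `:instruct` and `:toggle context` fit together, the sketch below assembles an OpenAI-style message list: the system message carries your standing instruction, and past turns are replayed while context is on. The data structures are assumptions for illustration, not py-isaac's actual internals.

```python
# Hedged sketch: how `:instruct` and conversation context typically map onto
# an OpenAI-style chat payload. Illustrative only, not py-isaac's internals.

# Set once via `:instruct`; sent with every request.
system_message = {"role": "system", "content": "Answer concisely, in plain text."}

history = []  # grows turn by turn while `:toggle context` is on

def build_messages(user_input: str) -> list:
    """Assemble the message list for one request."""
    return [system_message, *history, {"role": "user", "content": user_input}]

def remember(user_input: str, reply: str) -> None:
    """Append the finished turn so later requests stay coherent."""
    history.append({"role": "user", "content": user_input})
    history.append({"role": "assistant", "content": reply})
```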

---
### Tasks
- [ ] add shell command execution using a `!` prefix, e.g. `!ls` (sketched below).
- [ ] add Ollama support: the user should be able to choose a locally running Ollama API as the language model provider.
- [ ] user profiling: store general likes and dislikes, e.g. "concise answers", "a project they're working on", etc.
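
For the `!` prefix task, a minimal sketch (assuming a plain prefix check; the real design is still an open task) might look like this:

```python
# Hedged sketch for the first task: route inputs starting with "!" to the shell.
# Purely illustrative -- the actual design is an open task.
import subprocess

def handle(line: str) -> None:
    if line.startswith("!"):
        # Run the rest of the line as a shell command, e.g. "!ls" -> "ls".
        subprocess.run(line[1:], shell=True, check=False)
    else:
        ...  # hand the line to the language model as usual
```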

### Contributing
If you don't know where to start, pick one of the pending tasks and start with that. If you want to fix a bug or add other enhancements, please raise an issue first.

            
