neon-skill-fallback-llm

Name: neon-skill-fallback-llm
Version: 1.0.1
Home page: https://github.com/NeonGeckoCom/skill-fallback_llm
Upload time: 2023-10-27 00:09:18
Author: Neongecko
License: BSD-3-Clause
Requirements: none recorded

# <img src='./logo.svg' card_color="#FF8600" width="50" style="vertical-align:bottom"/> LLM Fallback

## Summary
Get an LLM response from the Neon Diana backend.

## Description
Converse with an LLM, and enable LLM responses as a fallback when Neon doesn't
have a better response.

To send a single query to an LLM, ask Neon to "ask Chat GPT <something>".
To start a conversation with an LLM, ask to "talk to Chat GPT"; everything you
say will then be sent to the LLM until you say goodbye or stop talking for a
while.

Enable fallback behavior by asking to "enable LLM fallback skill" or disable it
by asking to "disable LLM fallback".
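For context, Neon skills build on the Mycroft/OVOS skill framework, where a
fallback skill registers a handler with a priority and fires only when no other
skill claims the utterance. The sketch below illustrates that registration
pattern; the class name, priority value, and `ask_llm` helper are hypothetical
and are not taken from this skill's source.

```python
from mycroft.skills import FallbackSkill


class LLMFallbackSketch(FallbackSkill):
    """Minimal sketch of a fallback handler (hypothetical, for illustration)."""

    def initialize(self):
        # Handlers run in priority order (roughly 0 first, 100 last);
        # a high number lets every other skill try first.
        self.register_fallback(self.handle_llm_fallback, 85)

    def handle_llm_fallback(self, message):
        utterance = message.data.get("utterance", "")
        # A real implementation would forward the utterance to the
        # Neon Diana backend and speak the LLM's reply.
        reply = self.ask_llm(utterance)  # hypothetical helper
        if not reply:
            return False  # let the next fallback handler try
        self.speak(reply)
        return True  # utterance handled


def create_skill():
    return LLMFallbackSketch()
```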

To have a copy of LLM interactions sent via email, ask Neon to 
"email me a copy of our conversation".

## Examples

* "Explain quantum computing in simple terms"
* "Ask chat GPT what an LLM is"
* "Talk to chat GPT"
* "Enable LLM fallback skill"
* "Disable LLM fallback skill"
* "Email me a copy of our conversation"

            

Raw data

```json
{
    "_id": null,
    "home_page": "https://github.com/NeonGeckoCom/skill-fallback_llm",
    "name": "neon-skill-fallback-llm",
    "maintainer": "",
    "docs_url": null,
    "requires_python": "",
    "maintainer_email": "",
    "keywords": "",
    "author": "Neongecko",
    "author_email": "developers@neon.ai",
    "download_url": "https://files.pythonhosted.org/packages/0e/ef/38691ee19f85c73a710e71f0b44d85064ec230c4dd167671b987be7f1d93/neon-skill-fallback_llm-1.0.1.tar.gz",
    "platform": null,
    "description": "# <img src='./logo.svg' card_color=\"#FF8600\" width=\"50\" style=\"vertical-align:bottom\" style=\"vertical-align:bottom\">LLM Fallback  \n  \n## Summary\nGet an LLM response from the Neon Diana backend.\n\n## Description\nConverse with an LLM and enable LLM responses when Neon doesn't have a better\nresponse.\n\nTo send a single query to an LLM, you can ask Neon to \"ask Chat GPT <something>\".\nTo start conversing with an LLM, ask to \"talk to Chat GPT\" and have all of your input\nsent to an LLM until you say goodbye or stop talking for a while.\n\nEnable fallback behavior by asking to \"enable LLM fallback skill\" or disable it\nby asking to \"disable LLM fallback\".\n\nTo have a copy of LLM interactions sent via email, ask Neon to \n\"email me a copy of our conversation\".\n\n## Examples \n\n* \"Explain quantum computing in simple terms\"\n* \"Ask chat GPT what an LLM is\"\n* \"Talk to chat GPT\"\n* \"Enable LLM fallback skill\"\n* \"Disable LLM fallback skill\"\n* \"Email me a copy of our conversation\"\n",
    "bugtrack_url": null,
    "license": "BSD-3-Clause",
    "summary": "",
    "version": "1.0.1",
    "project_urls": {
        "Homepage": "https://github.com/NeonGeckoCom/skill-fallback_llm"
    },
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "9b64def80c98083e1d70952618d5b005ec86d9cf306771b006d5f4b4b98a45f0",
                "md5": "c41a6497355d45dbb7b3f09ac093f5fe",
                "sha256": "c9d4f5e8725d744f9dff278da55ca6652102ef453d43589b4ac0e234bdf594ea"
            },
            "downloads": -1,
            "filename": "neon_skill_fallback_llm-1.0.1-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "c41a6497355d45dbb7b3f09ac093f5fe",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": null,
            "size": 29301,
            "upload_time": "2023-10-27T00:09:17",
            "upload_time_iso_8601": "2023-10-27T00:09:17.130525Z",
            "url": "https://files.pythonhosted.org/packages/9b/64/def80c98083e1d70952618d5b005ec86d9cf306771b006d5f4b4b98a45f0/neon_skill_fallback_llm-1.0.1-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "0eef38691ee19f85c73a710e71f0b44d85064ec230c4dd167671b987be7f1d93",
                "md5": "eafbcb8387ce3c5374ee9117dcf22281",
                "sha256": "a3b8e3501687b0074c0b903be7c82d59b5555ebf407d03aa18d4661464b9443e"
            },
            "downloads": -1,
            "filename": "neon-skill-fallback_llm-1.0.1.tar.gz",
            "has_sig": false,
            "md5_digest": "eafbcb8387ce3c5374ee9117dcf22281",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": null,
            "size": 14699,
            "upload_time": "2023-10-27T00:09:18",
            "upload_time_iso_8601": "2023-10-27T00:09:18.614840Z",
            "url": "https://files.pythonhosted.org/packages/0e/ef/38691ee19f85c73a710e71f0b44d85064ec230c4dd167671b987be7f1d93/neon-skill-fallback_llm-1.0.1.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2023-10-27 00:09:18",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "NeonGeckoCom",
    "github_project": "skill-fallback_llm",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "requirements": [],
    "lcname": "neon-skill-fallback-llm"
}
```
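The raw block above mirrors what PyPI's JSON API returns for this release. As a
quick sketch using only the standard library, the metadata can be fetched and
each artifact verified against its recorded sha256; the endpoint follows PyPI's
documented `/pypi/<name>/<version>/json` pattern.

```python
import hashlib
import json
from urllib.request import urlopen

# PyPI's JSON API serves the same metadata shown in the raw block above.
with urlopen("https://pypi.org/pypi/neon-skill-fallback-llm/1.0.1/json") as resp:
    meta = json.load(resp)

for artifact in meta["urls"]:
    # Download each artifact and hash it.
    with urlopen(artifact["url"]) as dl:
        digest = hashlib.sha256(dl.read()).hexdigest()
    # Compare against the sha256 recorded in the metadata.
    assert digest == artifact["digests"]["sha256"], artifact["filename"]
    print(f"{artifact['filename']}: sha256 OK")
```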
        