# <img src='' card_color='#40DBB0' width='50' height='50' style='vertical-align:bottom'/> AlpacaCPP Persona
Give OpenVoiceOS some sass with [AlpacaCPP](https://github.com/antimatter15/alpaca.cpp).
## Examples
* "What is best in life?"
* "Do you like dogs"
* "Does God exist?"
## Usage
Spoken answers API:
```python
from ovos_solver_alpacacpp import AlpacaCPPSolver

# Path to a locally downloaded quantized Alpaca model (adjust to your setup)
ALPACA_MODEL_FILE = "/./models/ggml-alpaca-7b-q4.bin"

bot = AlpacaCPPSolver({"model": ALPACA_MODEL_FILE})

# Questions can be asked in other languages, e.g. Portuguese ("What is your favorite animal?")
sentence = bot.spoken_answer("Qual é o teu animal favorito?", {"lang": "pt-pt"})
# Meus animais favoritos são cães, gatos e tartarugas!
# ("My favorite animals are dogs, cats and turtles!")

for q in ["Does god exist?",
          "what is the speed of light?",
          "what is the meaning of life?",
          "What is your favorite color?",
          "What is best in life?"]:
    a = bot.get_spoken_answer(q)
    print(q, a)
```
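For quick experimentation outside of OVOS, the solver can also be wrapped in a simple interactive loop. This is a minimal sketch that only reuses the `AlpacaCPPSolver` constructor and `get_spoken_answer` call shown above; the model path is assumed to match your local setup.

```python
from ovos_solver_alpacacpp import AlpacaCPPSolver

# Assumed local model path, same as in the example above
ALPACA_MODEL_FILE = "/./models/ggml-alpaca-7b-q4.bin"

bot = AlpacaCPPSolver({"model": ALPACA_MODEL_FILE})

# Minimal read-eval-print chat loop; an empty line exits
while True:
    question = input("you> ").strip()
    if not question:
        break
    print("bot>", bot.get_spoken_answer(question))
```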
## Credit
This combines [Facebook's LLaMA](https://github.com/facebookresearch/llama), [Stanford Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html), [alpaca-lora](https://github.com/tloen/alpaca-lora) and the [corresponding weights](https://huggingface.co/tloen/alpaca-lora-7b/tree/main) by Eric Wang (which uses [Jason Phang's implementation of LLaMA](https://github.com/huggingface/transformers/pull/21955) on top of Hugging Face Transformers), and [llama.cpp](https://github.com/ggerganov/llama.cpp) by Georgi Gerganov. The chat implementation is based on Matvey Soloviev's [Interactive Mode](https://github.com/ggerganov/llama.cpp/pull/61) for llama.cpp. Inspired by [Simon Willison's](https://til.simonwillison.net/llms/llama-7b-m2) getting started guide for LLaMA and [Andy Matuschak](https://twitter.com/andy_matuschak/status/1636769182066053120)'s thread on adapting this to 13B, using fine-tuning weights by [Sam Witteveen](https://huggingface.co/samwit/alpaca13B-lora).
## Disclaimer
Note that the model weights are to be used for research purposes only: they are derivative of LLaMA and use the published instruction data from the Stanford Alpaca project, which was generated with OpenAI models, and OpenAI disallows using its outputs to train competing models.