CandyLLM

Name: CandyLLM
Version: 0.0.6
Home page: https://github.com/shreyanmitra/llm-wrapper
Summary: CandyLLM: Unified framework for HuggingFace and OpenAI text-generation models
Upload time: 2024-07-17 18:59:30
Author: Shreyan Mitra
Keywords: llms, transformers, mistral, llama, falcon, gpt-4, alpaca
Requirements: none recorded
            # CandyLLM 🍬

A simple, easy-to-use framework for HuggingFace and OpenAI text-generation models. The goal is eventually to integrate other sources, such as custom large language models (LLMs), into one coherent UI.

This is a work in progress, so pull requests and issues are welcome! That said, we try to keep the library as stable as possible, so that people installing it do not run into problems.

If you use this library, please cite Shreyan Mitra.

With all the administrivia out of the way, here are some examples of how to use the library. We are still setting up the official documentation. The following examples show some use cases, or tasks, and how a user of CandyLLM would invoke the model of their choice.

## Install package
```
pip install CandyLLM
```

## Task: Fetch Llama3-8b and run it with default parameters on a simple QA prompt without retrieval-augmented generation

```python
from CandyLLM import *
myLLM = LLMWrapper("MY_HF_TOKEN", testing=False)
myLLM.answer("What is the capital of Uzbekistan?") # Returns Tashkent
```
This works because the default model is Llama3-8b.
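
For reference, the same call with the default spelled out. A minimal sketch, assuming "Llama3-8b" is the registered alias for the default model (mirroring the "Llama2-7b" alias used in the next example):

```python
from CandyLLM import *

# Assumed alias for the default model; equivalent to omitting modelName entirely
myLLM = LLMWrapper("MY_HF_TOKEN", testing=False, modelName="Llama3-8b")
myLLM.answer("What is the capital of Uzbekistan?") # Returns Tashkent, as before
```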

## Task: Fetch Llama2-7b and run it with temperature = 0.6 on a QA prompt with retrieval-augmented generation
```python
from CandyLLM import *
myLLM = LLMWrapper("MY_HF_TOKEN", testing=False, modelName="Llama2-7b")
# Or, by full path: LLMWrapper("MY_HF_TOKEN", testing=False, modelName="meta-llama/Llama-2-7b-chat-hf", modelNameType="path")

# The retrieval context is passed positionally, before the keyword arguments, to avoid a SyntaxError
myLLM.answer("What is the capital of Funlandia?", "The capital of Funlandia is Funtown", task="QAWithRAG", temperature=0.6) # Returns Funtown
```

## Task: Fetch GPT-4 and run it with presence_penalty = 0.5 on an open-ended prompt
```python
from CandyLLM import *
myLLM = LLMWrapper("MY_OPENAI_TOKEN", testing=False, source="OpenAI", modelName="gpt-4-turbo", modelNameType="path")
myLLM.answer("Write a creative essay about sustainability", task="Open-ended", presence_penalty=0.5)
```
## Log out of HuggingFace and OpenAI and remove my API keys from the environment
```python
myLLM = LLMWrapper(...) # Create some LLM wrapper
myLLM.answer(...) # Do something with the LLM
myLLM.logout() # Log out and remove the API keys from the environment
```

## Check for malicious input prompts
```python
LLMWrapper.promptSafetyCheck("Is 1010 John Doe's social security number?") # Returns False to indicate an unsafe prompt
```
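
A natural way to use this check is as a guard before sending a prompt to the model. A minimal sketch, assuming `promptSafetyCheck` returns `True` for safe prompts (the inverse of the unsafe case above):

```python
from CandyLLM import *

myLLM = LLMWrapper("MY_HF_TOKEN", testing=False)
prompt = "What is the capital of Uzbekistan?"
if LLMWrapper.promptSafetyCheck(prompt): # Assumed: True means the prompt is safe
    print(myLLM.answer(prompt))
else:
    print("Prompt rejected as potentially unsafe")
```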

## Change Config
Want to use a different model? No need to create another wrapper.
```python
myLLM = LLMWrapper(...) # Create some LLM wrapper
myLLM.setConfig("MY_TOKEN", testing=False, source="HuggingFace", modelName="Mistral", modelNameType="alias") # Tada: a changed LLM wrapper
```

## Dummy LLM
Sometimes you don't want to spend the time and money to make API calls to an actual LLM, especially if you are testing a UI or an integration with a chat service. Dummy LLMs to the rescue! Our dummy LLM is called "Useless", and it returns answers immediately with very little computation spent (granted, the results it gives are useless, but, hey, what did you expect? 😃)
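
A minimal sketch of how this might look, assuming the dummy model is selected by its "Useless" alias and that `testing=True` signals a test run (the exact invocation may differ):

```python
from CandyLLM import *

# Hypothetical: "Useless" as the model alias and a placeholder token, since no real API call is made
dummyLLM = LLMWrapper("NO_TOKEN_NEEDED", testing=True, modelName="Useless")
dummyLLM.answer("Any prompt at all") # Returns a placeholder answer almost instantly
```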

## CandyUI
CandyUI is the user interface of CandyLLM. It provides a chatbot, a dropdown for choosing the LLM to use, parameter configuration for that LLM, and the option to apply pre-hoc and post-hoc methods to the user prompt and the LLM output. CandyUI can be integrated into, and communicate with, a larger UI through custom functions, or you can use the ``selfOutput`` option to display custom post-hoc metrics within CandyUI itself.

For example, running
```python
def postprocess(message, response):
    # Sample postprocessor_fn that returns the difference in length between the LLM response and the user prompt
    return len(response) - len(message)

x = LLMWrapper.getUI(postprocessor_fn=postprocess, selfOutput=True, selfOutputLabel="Length Difference")
```
deploys the following webpage:

![Screen Shot 2024-07-17 at 11 53 53 AM](https://github.com/user-attachments/assets/3e0bee23-4cad-427d-8c74-68057c033844)



You can also change how the output is shown. For example, for explainability purposes, you might want to set `selfOutputType = "HighlightedText"`:

```python
def postprocess(message, response):
    # Labels each word of the user prompt as "important" or "unimportant" based on its length
    importantWords = []
    for word in message.split():
        importantWords.append((word, "important" if len(word) > 3 else "unimportant"))
    return importantWords

x = LLMWrapper.getUI(postprocessor_fn=postprocess, selfOutput=True, selfOutputLabel="Important Words", selfOutputType="HighlightedText")
```
The UI now looks like this:
![Screen Shot 2024-07-17 at 11 50 43 AM](https://github.com/user-attachments/assets/83a3f3ae-a566-4fa1-aa9e-a9b3f8751e80)



            
