bingbong

Name: bingbong
Version: 0.4.3
Home page: https://github.com/deep-diver/PingPong
Summary: PingPong is a management library for LLM-based applications.
Upload time: 2023-11-17 05:35:45
Author: chansung park
Requires Python: >=3.8
Keywords: LLM, pingpong, prompt, context, management
Requirements: none recorded

# PingPong

<p align="center">
    <img width="200" src="https://raw.githubusercontent.com/deep-diver/PingPong/main/assets/logo.png">
</p>

PingPong is a simple library for managing pings (prompts) and pongs (responses). Its main purpose is to manage histories and contexts in LLM-based applications such as ChatGPT.

The basic motivations behind this project are:
- **Abstract prompts and responses so that any UI and prompt format can be adopted**
  - There are a number of instruction-following, fine-tuned language models, but they are fine-tuned on differently crafted datasets. For instance, Alpaca expects `### Instruction:`, `### Response:`, and `### Input:` markers, while StackLLaMA expects `Question:` and `Answer:`, even though the underlying pre-trained LLM is the same LLaMA (a rough sketch of this idea follows the list).
  - There are a number of UIs built to interact with language models, such as chatbots. Even for a single chatbot, one project could use Gradio while another uses JavaScript-based tools, and each represents the prompt history in a different data structure.
- **Abstract context management strategies so that any number of them can be applied**
  - There could be a number of strategies for handling context effectively, given the limit on input tokens in language models (usually 4096), and it is also possible to mix different strategies.
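
As a rough illustration of the first point, the same ping/pong history can be rendered into either prompt format. The two formatter functions below are hypothetical helpers written for this sketch, not part of PingPong's API:

```python
# Hypothetical formatters, not PingPong's API: render the same history
# (a list of (ping, pong) pairs) in Alpaca style and in StackLLaMA style.
history = [("What is the capital of France?", "Paris."),
           ("And of Italy?", "Rome.")]

def to_alpaca(history):
    # Alpaca-style markers
    return "\n\n".join(
        f"### Instruction:\n{ping}\n\n### Response:\n{pong}" for ping, pong in history
    )

def to_stackllama(history):
    # StackLLaMA-style markers
    return "\n\n".join(f"Question: {ping}\n\nAnswer: {pong}" for ping, pong in history)

print(to_alpaca(history))
print(to_stackllama(history))
```

In PingPong, this role is played by the PPManager classes (e.g. `GradioAlpacaChatPPManager` in the example below), so changing the prompt format or UI means swapping the manager rather than rewriting the rest of the application.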

## Installation

```shell
$ pip install bingbong
```
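
Note that the distribution name (`bingbong`) differs from the import name (`pingpong`), as the usage example below shows. A quick sanity check after installing:

```shell
$ python -c "import pingpong"
```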

## Example usage

```python
from pingpong import PingPong
from pingpong.gradio import GradioAlpacaChatPPManager
from pingpong.context import CtxAutoSummaryStrategy
from pingpong.context import CtxLastWindowStrategy
from pingpong.context import CtxSearchWindowStrategy

ppmanager = GradioAlpacaChatPPManager()
strategies = [
    CtxAutoSummaryStrategy(2),   # request a summary once 2 pingpongs accumulate
    CtxLastWindowStrategy(1),    # keep only the last 1 conversation
    CtxSearchWindowStrategy(1)   # slide a window of size 1 over the whole history
]

for i in range(3):
    ppmanager.add_pingpong(PingPong(f"ping-{i}", f"pong-{i}"))

    for strategy in strategies:
        if isinstance(strategy, CtxAutoSummaryStrategy):
            sum_req, to_sum_prompt = strategy(ppmanager)

            if sum_req:
                # enough pingpongs have accumulated; summarize them here
                ...
        elif isinstance(strategy, CtxLastWindowStrategy):
            last_convs = strategy(ppmanager)

            # only the most recent conversation is of interest here
            ...
        elif isinstance(strategy, CtxSearchWindowStrategy):
            for cur_win in strategy(ppmanager):
                # scan the entire conversation through a sliding window of size 1
                # to find history relevant to the most recent conversation
                ...
```
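
As a concrete illustration, here is a minimal sketch of how the summarization branch above might be completed. `call_llm` is a hypothetical placeholder for whatever model you use; everything else mirrors the usage shown above:

```python
from pingpong import PingPong
from pingpong.gradio import GradioAlpacaChatPPManager
from pingpong.context import CtxAutoSummaryStrategy

def call_llm(prompt: str) -> str:
    # hypothetical: send the prompt to your LLM of choice and return its completion
    return f"(summary of {len(prompt)} characters)"

ppmanager = GradioAlpacaChatPPManager()
auto_summary = CtxAutoSummaryStrategy(2)  # request a summary once 2 pingpongs pile up

summaries = []
for i in range(4):
    ppmanager.add_pingpong(PingPong(f"ping-{i}", f"pong-{i}"))

    sum_req, to_sum_prompt = auto_summary(ppmanager)
    if sum_req:
        # enough pingpongs have accumulated; compress them and keep the summary
        summaries.append(call_llm(to_sum_prompt))
```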

## Todos

- [ ] Add a working example with a Gradio application
- [ ] Make the documentation more beginner-friendly

            
