llmpool


Name: llmpool
Version: 0.2.2
Home page: https://github.com/deep-diver/LLM-Pool
Summary: Large Language Models' pool management library
Upload time: 2023-05-25 02:29:46
Author: chansung park
Maintainer: (not specified)
Docs URL: none
Requires Python: (not specified)
License: (not specified)
Keywords: LLM, instance pool, management
Requirements: no requirements were recorded
Travis-CI: no Travis
Coveralls test coverage: no coveralls
            # LLM-Pool

This simple project manages multiple LLMs (Large Language Models) in one place. Because there are so many fine-tuned LLMs, and it is hard to evaluate which one is better than the others, it is useful to test as many models as possible. Below are the two use cases I had in mind when kicking off this project.

- compare generated text from different models side by side
- complete conversation in collaboration of different models

![](https://i.ibb.co/GH55nWs/2023-05-09-12-09-58.png)
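The pool abstraction behind the first use case, iterating several models and printing their outputs side by side, can be sketched in plain Python. `StubModel` below is a stand-in invented for illustration; it is not part of llmpool's API.

```python
# Minimal sketch of the pool idea: keep several models in one collection,
# then iterate and compare their outputs side by side.
class StubModel:
    """Stand-in for a real LLM wrapper (hypothetical, not an llmpool class)."""

    def __init__(self, name, prefix):
        self.name = name
        self.prefix = prefix

    def batch_gen(self, prompts):
        # A real model would run text generation; here we just tag the prompt.
        return [f"{self.prefix}: {p}" for p in prompts]


pool = [
    StubModel("alpaca-lora-13b", "model-A"),
    StubModel("stable-vicuna-13b", "model-B"),
]

# Side-by-side comparison: the same prompt goes to every model in the pool.
for model in pool:
    print(model.name, "->", model.batch_gen(["hello world"])[0])
```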

## Usage

```python

from llmpool import LLModelPool
from llmpool import LocalLLModel, LocalLoRALLModel
from llmpool import RemoteTxtGenIfLLModel

from transformers import AutoModelForCausalLM, GenerationConfig

model_pool = LLModelPool()
model_pool.add_models(
  # alpaca-lora 13b
  LocalLoRALLModel(
    "alpaca-lora-13b",
    "elinas/llama-13b-hf-transformers-4.29",
    "LLMs/Alpaca-LoRA-EvolInstruct-13B",
    model_cls=AutoModelForCausalLM
  ),

  RemoteTxtGenIfLLModel(
    "stable-vicuna-13b",
    "https://...:8080"
  ),
)

for model in model_pool:
  result = model.batch_gen(
    ["hello world"],
    GenerationConfig(...)
  )
  print(result)

  _, stream_results = model.stream_gen(
    "hello world",
    GenerationConfig(...)
  )

  for ret in stream_results:
    if isinstance(model, (LocalLoRALLModel, LocalLLModel)):
      print(ret, end='')
    else:
      print(ret.token.text, end='')

```

Alternatively, you can organize the model pool with a YAML file:

```python
from llmpool import instantiate_models

model_pool = instantiate_models('...yaml')
```
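The YAML schema itself is not documented here, so as a rough illustration, here is how a parsed config (a Python dict mirroring a hypothetical YAML file) might describe the pool. All field names (`models`, `type`, `name`, and so on) are assumptions for this sketch, not the schema that `instantiate_models` actually expects.

```python
# Hypothetical parsed config, as PyYAML might return it. The layout mirrors
# the two models registered in the usage example above.
config = {
    "models": [
        {"type": "local_lora", "name": "alpaca-lora-13b",
         "base": "elinas/llama-13b-hf-transformers-4.29",
         "adapter": "LLMs/Alpaca-LoRA-EvolInstruct-13B"},
        {"type": "remote_txt_gen", "name": "stable-vicuna-13b",
         "url": "https://...:8080"},
    ]
}

def model_names(cfg):
    """Collect the declared model names from a parsed config."""
    return [entry["name"] for entry in cfg["models"]]

print(model_names(config))  # -> ['alpaca-lora-13b', 'stable-vicuna-13b']
```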

## Todo
- [ ] Add example notebooks
- [X] Yaml parser to add models to model pool

            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/deep-diver/LLM-Pool",
    "name": "llmpool",
    "maintainer": "",
    "docs_url": null,
    "requires_python": "",
    "maintainer_email": "",
    "keywords": "LLM,instance pool,management",
    "author": "chansung park",
    "author_email": "deep.diver.csp@gmail.com",
    "download_url": "",
    "platform": null,
    "description": "# LLM-Pool\n\nThis simple project is to manage multiple LLM(Large Language Model)s in one place. Because there are too many fine-tuned LLMs, and it is hard to evaluate which one is bettern than others, it might be useful to test as many models as possible. Below is the two useful usecases that I had in mind when kicking off this project.\n\n- compare generated text from different models side by side\n- complete conversation in collaboration of different models\n\n![](https://i.ibb.co/GH55nWs/2023-05-09-12-09-58.png)\n\n## Usecase\n\n```python\n\nfrom llmpool import LLModelPool\nfrom llmpool import LocalLoRAModel\nfrom llmpool import RemoteTxtGenIfLLModel\n\nfrom transformers import AutoModelForCausalLM\n\nmodel_pool = LLModelPool()\nmodel_pool.add_models(\n  # alpaca-lora 13b\n  LocalLoRALLModel(\n    \"alpaca-lora-13b\",\n    \"elinas/llama-13b-hf-transformers-4.29\",\n    \"LLMs/Alpaca-LoRA-EvolInstruct-13B\",\n    model_cls=AutoModelForCausalLM\n  ),\n  \n  RemoteTxtGenIfLLModel(\n    \"stable-vicuna-13b\",\n    \"https://...:8080\"\n  ),\n)\n\nfor model in model_pool:\n  result = model.batch_gen(\n    [\"hello world\"], \n    GenerationConfig(...)\n  )\n  print(result)\n  \n  _, stream_result = model.stream_gen(\n    \"hello world\",\n    GenerationConfig(...)\n  )\n\n  for ret in stream_results:\n    if instanceof(model, LocalLoRALLModel) or \\\n      instanceof(model, LocalLLModel):\n      print(ret, end='')\n    else:\n      print(ret.token.text, end='')\n\n```\n\nAlternatively, you can organize the model pool with yaml file\n\n```python\nfrom llmpool import instantiate_models\n\nmodel_pool = instantiate_models('...yaml')\n```\n\n## Todo\n- [ ] Add example notebooks\n- [X] Yaml parser to add models to model pool\n",
    "bugtrack_url": null,
    "license": "",
    "summary": "Large Language Models' pool management library",
    "version": "0.2.2",
    "project_urls": {
        "Homepage": "https://github.com/deep-diver/LLM-Pool"
    },
    "split_keywords": [
        "llm",
        "instance pool",
        "management"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "8cd3f270f3a6f85354394eb9b4375c4c0b1a7830cbded4cf0e8d48fe57c9ece4",
                "md5": "57501e6f6616cde7e441f26b4bfd6722",
                "sha256": "f4856c17b43f9d7de069619be5cfc82dee9733d41e2c2d110b6cf79d328115fb"
            },
            "downloads": -1,
            "filename": "llmpool-0.2.2-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "57501e6f6616cde7e441f26b4bfd6722",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": null,
            "size": 9674,
            "upload_time": "2023-05-25T02:29:46",
            "upload_time_iso_8601": "2023-05-25T02:29:46.020168Z",
            "url": "https://files.pythonhosted.org/packages/8c/d3/f270f3a6f85354394eb9b4375c4c0b1a7830cbded4cf0e8d48fe57c9ece4/llmpool-0.2.2-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2023-05-25 02:29:46",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "deep-diver",
    "github_project": "LLM-Pool",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "requirements": [],
    "lcname": "llmpool"
}
        