| Name | llm-client |
| --- | --- |
| Version | 0.8.0 |
| Summary | SDK for using LLM |
| upload_time | 2023-07-22 20:30:31 |
| home_page | |
| maintainer | |
| docs_url | None |
| author | |
| requires_python | >=3.9 |
| license | |
| keywords | |
| VCS | |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
# LLM-Client-SDK
[Tests](https://github.com/uripeled2/llm-client-sdk/actions/workflows/test.yml)
[License: MIT](https://opensource.org/licenses/MIT)
LLM-Client-SDK is an SDK for seamless integration with generative AI large language models
(we currently support OpenAI, Google, AI21, HuggingfaceHub, Aleph Alpha, Anthropic, and
local models with transformers, with many more coming soon).

Our vision is to provide an async-native, production-ready SDK while creating
a powerful and fast integration with different LLMs, without making the user give up
any flexibility (API params, endpoints, etc.). We also provide a sync version; see
more details below in the Usage section.
## Base Interface
The package exposes two simple interfaces for seamless integration with LLMs (in the future, we
will expand the interface to support more tasks, such as listing models, edits, etc.):
```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Any, Optional
from enum import Enum
from dataclasses_json import dataclass_json, config
from aiohttp import ClientSession


class BaseLLMClient(ABC):
    @abstractmethod
    async def text_completion(self, prompt: str, **kwargs) -> list[str]:
        raise NotImplementedError()

    async def get_tokens_count(self, text: str, **kwargs) -> int:
        raise NotImplementedError()


class Role(Enum):
    SYSTEM = "system"
    USER = "user"
    ASSISTANT = "assistant"


@dataclass_json
@dataclass
class ChatMessage:
    role: Role = field(metadata=config(encoder=lambda role: role.value, decoder=Role))
    content: str
    name: Optional[str] = field(default=None, metadata=config(exclude=lambda name: name is None))
    example: bool = field(default=False, metadata=config(exclude=lambda _: True))


@dataclass
class LLMAPIClientConfig:
    api_key: str
    session: ClientSession
    base_url: Optional[str] = None
    default_model: Optional[str] = None
    headers: dict[str, Any] = field(default_factory=dict)


class BaseLLMAPIClient(BaseLLMClient, ABC):
    def __init__(self, config: LLMAPIClientConfig):
        ...

    @abstractmethod
    async def text_completion(self, prompt: str, model: Optional[str] = None, max_tokens: int | None = None,
                              temperature: Optional[float] = None, top_p: Optional[float] = None, **kwargs) -> list[str]:
        raise NotImplementedError()

    async def chat_completion(self, messages: list[ChatMessage], temperature: float = 0,
                              max_tokens: int = 16, model: Optional[str] = None, **kwargs) -> list[str]:
        raise NotImplementedError()

    async def embedding(self, text: str, model: Optional[str] = None, **kwargs) -> list[float]:
        raise NotImplementedError()

    async def get_chat_tokens_count(self, messages: list[ChatMessage], **kwargs) -> int:
        raise NotImplementedError()
```
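For example, because `ChatMessage` is decorated with `dataclass_json`, it can be serialized with the `to_dict()` method that dataclasses-json adds, and the field-level `config` above controls the encoding. A minimal sketch (the commented output is what the metadata above implies, not a recorded run):
```python
# Minimal serialization sketch: role is encoded via its value, name is dropped
# when None, and example is always excluded, per the config metadata above.
message = ChatMessage(role=Role.USER, content="Hello!")
print(message.to_dict())  # expected: {'role': 'user', 'content': 'Hello!'}
```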
## Requirements
Python 3.9+
## Installation
If you are worried about the size of the package, you can install only the clients you need;
by default, none of the clients are installed.
For support of all current clients
```console
$ pip install llm-client[all]
```
For only the base interface and some lightweight LLM clients (AI21 and Aleph Alpha)
```console
$ pip install llm-client
```
### Optional Dependencies
For support of all current API clients
```console
$ pip install llm-client[api]
```
For only local client support
```console
$ pip install llm-client[local]
```
For sync support
```console
$ pip install llm-client[sync]
```
For only OpenAI support
```console
$ pip install llm-client[openai]
```
For only HuggingFace support
```console
$ pip install llm-client[huggingface]
```
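Extras can also be combined in a single install; for example, if you want the OpenAI client together with the sync helpers (the quotes just protect the brackets from your shell):
```console
$ pip install "llm-client[openai,sync]"
```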
## Usage
Using OpenAI directly through OpenAIClient - maximum control and the best practice for production
```python
import os
from aiohttp import ClientSession
from llm_client import ChatMessage, Role, OpenAIClient, LLMAPIClientConfig

OPENAI_API_KEY = os.environ["API_KEY"]
OPENAI_ORG_ID = os.getenv("ORG_ID")


async def main():
    async with ClientSession() as session:
        llm_client = OpenAIClient(LLMAPIClientConfig(OPENAI_API_KEY, session, default_model="text-davinci-003",
                                                     headers={"OpenAI-Organization": OPENAI_ORG_ID}))  # The headers are optional
        text = "This is indeed a test"
        messages = [ChatMessage(role=Role.USER, content="Hello!"),
                    ChatMessage(role=Role.SYSTEM, content="Hi there! How can I assist you today?")]

        print("number of tokens:", await llm_client.get_tokens_count(text))  # 5
        print("number of tokens for chat completion:", await llm_client.get_chat_tokens_count(messages, model="gpt-3.5-turbo"))  # 23
        print("generated chat:", await llm_client.chat_completion(messages, model="gpt-3.5-turbo"))  # ['Hi there! How can I assist you today?']
        print("generated text:", await llm_client.text_completion(text))  # [' string\n\nYes, this is a test string. Test strings are used to']
        print("generated embedding:", await llm_client.embedding(text))  # [0.0023064255, -0.009327292, ...]
```
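To actually run an async example such as `main()` above, use the standard library event loop, e.g.:
```python
import asyncio

# Run the async entry point defined above.
asyncio.run(main())
```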
Using LLMAPIClientFactory - perfect if you want to move fast and not handle the client session yourself
```python
import os
from llm_client import LLMAPIClientFactory, LLMAPIClientType

OPENAI_API_KEY = os.environ["API_KEY"]


async def main():
    async with LLMAPIClientFactory() as llm_api_client_factory:
        llm_client = llm_api_client_factory.get_llm_api_client(LLMAPIClientType.OPEN_AI,
                                                               api_key=OPENAI_API_KEY,
                                                               default_model="text-davinci-003")

        await llm_client.text_completion(prompt="This is indeed a test")
        await llm_client.text_completion(prompt="This is indeed a test", max_tokens=50)


# Or if you don't want to use async
from llm_client import init_sync_llm_api_client

llm_client = init_sync_llm_api_client(LLMAPIClientType.OPEN_AI, api_key=OPENAI_API_KEY,
                                      default_model="text-davinci-003")

llm_client.text_completion(prompt="This is indeed a test")
llm_client.text_completion(prompt="This is indeed a test", max_tokens=50)
```
Local model
```python
import os
from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM, AutoTokenizer
from llm_client import LocalClientConfig, LocalClient


async def main():
    try:
        model = AutoModelForCausalLM.from_pretrained(os.environ["MODEL_NAME_OR_PATH"])
    except ValueError:
        model = AutoModelForSeq2SeqLM.from_pretrained(os.environ["MODEL_NAME_OR_PATH"])
    tokenizer = AutoTokenizer.from_pretrained(os.environ["MODEL_NAME_OR_PATH"])
    llm_client = LocalClient(LocalClientConfig(model, tokenizer, os.environ["TENSORS_TYPE"], os.environ["DEVICE"]))

    await llm_client.text_completion(prompt="This is indeed a test")
    await llm_client.text_completion(prompt="This is indeed a test", max_tokens=50)


# Or if you don't want to use async
import async_to_sync

try:
    model = AutoModelForCausalLM.from_pretrained(os.environ["MODEL_NAME_OR_PATH"])
except ValueError:
    model = AutoModelForSeq2SeqLM.from_pretrained(os.environ["MODEL_NAME_OR_PATH"])
tokenizer = AutoTokenizer.from_pretrained(os.environ["MODEL_NAME_OR_PATH"])
llm_client = LocalClient(LocalClientConfig(model, tokenizer, os.environ["TENSORS_TYPE"], os.environ["DEVICE"]))

llm_client = async_to_sync.methods(llm_client)

llm_client.text_completion(prompt="This is indeed a test")
llm_client.text_completion(prompt="This is indeed a test", max_tokens=50)
```
## Contributing
Contributions are welcome! Please check out the todos below, and feel free to open an issue or a pull request.
### Todo
*The list is unordered*
- [x] Add support for more LLMs
- [x] Anthropic
- [x] Google
- [ ] Cohere
- [x] Add support for more functions via LLMs
- [x] embeddings
- [x] chat
- [ ] list models
- [ ] edits
- [ ] more
- [ ] Add contributing guidelines and linter
- [ ] Create an easy way to run multiple LLMs in parallel with the same prompts (see the sketch after this list)
- [x] Convert common models parameter
- [x] temperature
- [x] max_tokens
- [x] top_p
- [ ] more
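Until such a helper exists, a minimal sketch of fanning one prompt out to several clients with `asyncio.gather` could look like the following. It assumes `BaseLLMClient` is importable from `llm_client` and that `clients` holds instances you already constructed, as in the Usage examples:
```python
import asyncio

from llm_client import BaseLLMClient  # assumed to be exported at the package level


async def fan_out(prompt: str, clients: list[BaseLLMClient]) -> list[list[str]]:
    # Send the same prompt to every client concurrently; results come back
    # in the same order as the clients list.
    return await asyncio.gather(*(client.text_completion(prompt=prompt) for client in clients))
```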
### Development
To install the package in development mode, run the following command:
```console
$ pip install -e ".[all,test]"
```
To run the tests, run the following command:
```console
$ pytest tests
```
If you want to add a new LLMClient, you need to implement BaseLLMClient or BaseLLMAPIClient.
If you are adding a BaseLLMAPIClient, you also need to register it in LLMAPIClientFactory.
You can add dependencies for your LLMClient in [pyproject.toml](pyproject.toml); also make sure you add a
matrix.flavor entry in [test.yml](.github/workflows/test.yml).
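As a rough illustration only (not the project's actual code), a new API client might start from a skeleton like the one below. It assumes `BaseLLMAPIClient` is importable from `llm_client`; the endpoint, payload shape, and response parsing are placeholders you would replace with the real provider's API:
```python
from typing import Optional

from llm_client import BaseLLMAPIClient, LLMAPIClientConfig


class ExampleClient(BaseLLMAPIClient):
    """Hypothetical skeleton for a new API client, shown only as a starting point."""

    def __init__(self, config: LLMAPIClientConfig):
        super().__init__(config)
        self._config = config

    async def text_completion(self, prompt: str, model: Optional[str] = None, max_tokens: int | None = None,
                              temperature: Optional[float] = None, top_p: Optional[float] = None,
                              **kwargs) -> list[str]:
        # Placeholder endpoint and payload; a real client follows the provider's
        # API and reuses the shared aiohttp session from the config.
        async with self._config.session.post(
            f"{self._config.base_url}/complete",  # hypothetical endpoint
            json={"prompt": prompt,
                  "model": model or self._config.default_model,
                  "max_tokens": max_tokens},
            headers=self._config.headers,
        ) as response:
            data = await response.json()
            return [data["completion"]]  # hypothetical response field
```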
## Raw data
```json
{
    "_id": null,
    "home_page": "",
    "name": "llm-client",
    "maintainer": "",
    "docs_url": null,
    "requires_python": ">=3.9",
    "maintainer_email": "",
    "keywords": "",
    "author": "",
    "author_email": "Uri Peled <uripeled2@gmail.com>",
    "download_url": "https://files.pythonhosted.org/packages/cc/99/4d86d1d1b7d66dd23639c82601ce7ec504ffbce9cbd71acf1794f6952973/llm_client-0.8.0.tar.gz",
    "platform": null,
"description": "# LLM-Client-SDK\n[](https://github.com/uripeled2/llm-client-sdk/actions/workflows/test.yml)\n[](https://opensource.org/licenses/MIT)\n\nLLM-Client-SDK is an SDK for seamless integration with generative AI large language models\n(We currently support - OpenAI, Google, AI21, HuggingfaceHub, Aleph Alpha, Anthropic,\nLocal models with transformers - and many more soon).\n\nOur vision is to provide async native and production ready SDK while creating \na powerful and fast integration with different LLM without letting the user lose \nany flexibility (API params, endpoints etc.). *We also provide sync version, see\nmore details below in Usage section.\n\n## Base Interface\nThe package exposes two simple interfaces for seamless integration with LLMs (In the future, we \nwill expand the interface to support more tasks like list models, edits, etc.):\n```python\nfrom abc import ABC, abstractmethod\nfrom dataclasses import dataclass, field\nfrom typing import Any, Optional\nfrom enum import Enum\nfrom dataclasses_json import dataclass_json, config\nfrom aiohttp import ClientSession\n\n\nclass BaseLLMClient(ABC):\n @abstractmethod\n async def text_completion(self, prompt: str, **kwargs) -> list[str]:\n raise NotImplementedError()\n\n async def get_tokens_count(self, text: str, **kwargs) -> int:\n raise NotImplementedError()\n\n\nclass Role(Enum):\n SYSTEM = \"system\"\n USER = \"user\"\n ASSISTANT = \"assistant\"\n\n\n@dataclass_json\n@dataclass\nclass ChatMessage:\n role: Role = field(metadata=config(encoder=lambda role: role.value, decoder=Role))\n content: str\n name: Optional[str] = field(default=None, metadata=config(exclude=lambda name: name is None))\n example: bool = field(default=False, metadata=config(exclude=lambda _: True))\n \n\n@dataclass\nclass LLMAPIClientConfig:\n api_key: str\n session: ClientSession\n base_url: Optional[str] = None\n default_model: Optional[str] = None\n headers: dict[str, Any] = field(default_factory=dict)\n\n\nclass BaseLLMAPIClient(BaseLLMClient, ABC):\n def __init__(self, config: LLMAPIClientConfig):\n ...\n\n @abstractmethod\n async def text_completion(self, prompt: str, model: Optional[str] = None, max_tokens: int | None = None,\n temperature: Optional[float] = None, top_p: Optional[float] = None, **kwargs) -> list[str]:\n raise NotImplementedError()\n\n async def chat_completion(self, messages: list[ChatMessage], temperature: float = 0,\n max_tokens: int = 16, model: Optional[str] = None, **kwargs) -> list[str]:\n raise NotImplementedError()\n\n async def embedding(self, text: str, model: Optional[str] = None, **kwargs) -> list[float]:\n raise NotImplementedError()\n\n async def get_chat_tokens_count(self, messages: list[ChatMessage], **kwargs) -> int:\n raise NotImplementedError()\n```\n\n## Requirements\n\nPython 3.9+\n\n## Installation\nIf you are worried about the size of the package you can install only the clients you need,\nby default we install none of the clients.\n\nFor all current clients support\n```console\n$ pip install llm-client[all]\n```\nFor only the base interface and some light LLMs clients (AI21 and Aleph Alpha)\n```console\n$ pip install llm-client\n```\n### Optional Dependencies\nFor all current api clients support\n```console\n$ pip install llm-client[api]\n```\nFor only local client support\n```console\n$ pip install llm-client[local]\n```\nFor sync support\n```console\n$ pip install llm-client[sync]\n```\nFor only OpenAI support\n```console\n$ pip install llm-client[openai]\n```\nFor only HuggingFace 
support\n```console\n$ pip install llm-client[huggingface]\n```\n\n\n## Usage\n\nUsing OpenAI directly through OpenAIClient - Maximum control and best practice in production\n```python\nimport os\nfrom aiohttp import ClientSession\nfrom llm_client import ChatMessage, Role, OpenAIClient, LLMAPIClientConfig\n\nOPENAI_API_KEY = os.environ[\"API_KEY\"]\nOPENAI_ORG_ID = os.getenv(\"ORG_ID\")\n\n\nasync def main():\n async with ClientSession() as session:\n llm_client = OpenAIClient(LLMAPIClientConfig(OPENAI_API_KEY, session, default_model=\"text-davinci-003\",\n headers={\"OpenAI-Organization\": OPENAI_ORG_ID})) # The headers are optional\n text = \"This is indeed a test\"\n messages = [ChatMessage(role=Role.USER, content=\"Hello!\"),\n ChatMessage(role=Role.SYSTEM, content=\"Hi there! How can I assist you today?\")]\n\n print(\"number of tokens:\", await llm_client.get_tokens_count(text)) # 5\n print(\"number of tokens for chat completion:\", await llm_client.get_chat_tokens_count(messages, model=\"gpt-3.5-turbo\")) # 23\n print(\"generated chat:\", await llm_client.chat_completion(messages, model=\"gpt-3.5-turbo\")) # ['Hi there! How can I assist you today?']\n print(\"generated text:\", await llm_client.text_completion(text)) # [' string\\n\\nYes, this is a test string. Test strings are used to']\n print(\"generated embedding:\", await llm_client.embedding(text)) # [0.0023064255, -0.009327292, ...]\n```\nUsing LLMAPIClientFactory - Perfect if you want to move fast and to not handle the client session yourself\n```python\nimport os\nfrom llm_client import LLMAPIClientFactory, LLMAPIClientType\n\nOPENAI_API_KEY = os.environ[\"API_KEY\"]\n\n\nasync def main():\n async with LLMAPIClientFactory() as llm_api_client_factory:\n llm_client = llm_api_client_factory.get_llm_api_client(LLMAPIClientType.OPEN_AI,\n api_key=OPENAI_API_KEY,\n default_model=\"text-davinci-003\")\n\n await llm_client.text_completion(prompt=\"This is indeed a test\")\n await llm_client.text_completion(prompt=\"This is indeed a test\", max_tokens=50)\n\n \n# Or if you don't want to use async\nfrom llm_client import init_sync_llm_api_client\n\nllm_client = init_sync_llm_api_client(LLMAPIClientType.OPEN_AI, api_key=OPENAI_API_KEY,\n default_model=\"text-davinci-003\")\n\nllm_client.text_completion(prompt=\"This is indeed a test\")\nllm_client.text_completion(prompt=\"This is indeed a test\", max_tokens=50)\n```\nLocal model\n```python\nimport os\nfrom transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM, AutoTokenizer\nfrom llm_client import LocalClientConfig, LocalClient\n\nasync def main():\n try:\n model = AutoModelForCausalLM.from_pretrained(os.environ[\"MODEL_NAME_OR_PATH\"])\n except ValueError:\n model = AutoModelForSeq2SeqLM.from_pretrained(os.environ[\"MODEL_NAME_OR_PATH\"])\n tokenizer = AutoTokenizer.from_pretrained(os.environ[\"MODEL_NAME_OR_PATH\"])\n llm_client = LocalClient(LocalClientConfig(model, tokenizer, os.environ[\"TENSORS_TYPE\"], os.environ[\"DEVICE\"]))\n\n await llm_client.text_completion(prompt=\"This is indeed a test\")\n await llm_client.text_completion(prompt=\"This is indeed a test\", max_tokens=50)\n\n\n# Or if you don't want to use async\nimport async_to_sync\n\ntry:\n model = AutoModelForCausalLM.from_pretrained(os.environ[\"MODEL_NAME_OR_PATH\"])\nexcept ValueError:\n model = AutoModelForSeq2SeqLM.from_pretrained(os.environ[\"MODEL_NAME_OR_PATH\"])\ntokenizer = AutoTokenizer.from_pretrained(os.environ[\"MODEL_NAME_OR_PATH\"])\nllm_client = LocalClient(LocalClientConfig(model, 
tokenizer, os.environ[\"TENSORS_TYPE\"], os.environ[\"DEVICE\"]))\n\nllm_client = async_to_sync.methods(llm_client)\n\nllm_client.text_completion(prompt=\"This is indeed a test\")\nllm_client.text_completion(prompt=\"This is indeed a test\", max_tokens=50)\n```\n\n## Contributing\n\nContributions are welcome! Please check out the todos below, and feel free to open issue or a pull request.\n\n### Todo\n*The list is unordered*\n\n- [x] Add support for more LLMs\n - [x] Anthropic\n - [x] Google\n - [ ] Cohere\n- [x] Add support for more functions via LLMs \n - [x] embeddings\n - [x] chat\n - [ ] list models\n - [ ] edits\n - [ ] more\n- [ ] Add contributing guidelines and linter\n- [ ] Create an easy way to run multiple LLMs in parallel with the same prompts\n- [x] Convert common models parameter\n - [x] temperature \n - [x] max_tokens\n - [x] top_p\n - [ ] more\n\n### Development\nTo install the package in development mode, run the following command:\n```console\n$ pip install -e \".[all,test]\"\n```\nTo run the tests, run the following command:\n```console\n$ pytest tests\n```\nIf you want to add a new LLMClient you need to implement BaseLLMClient or BaseLLMAPIClient.\n\nIf you are adding a BaseLLMAPIClient you also need to add him in LLMAPIClientFactory.\n\nYou can add dependencies to your LLMClient in [pyproject.toml](pyproject.toml) also make sure you are adding a\nmatrix.flavor in [test.yml](.github%2Fworkflows%2Ftest.yml). \n",
"bugtrack_url": null,
"license": "",
"summary": "SDK for using LLM",
"version": "0.8.0",
"project_urls": {
"Homepage": "https://github.com/uripeled2/llm-client-sdk"
},
"split_keywords": [],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "e38acf780929374a49aa4e432878b7e94ed80c3a4ebf678163ebbfcad838409f",
"md5": "daebc50069bd79eae4342a3a0df39f54",
"sha256": "cd3109779fd69ef1b17ce227d1809c36113013a88d790b3069ab95cddf40e833"
},
"downloads": -1,
"filename": "llm_client-0.8.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "daebc50069bd79eae4342a3a0df39f54",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.9",
"size": 17181,
"upload_time": "2023-07-22T20:30:29",
"upload_time_iso_8601": "2023-07-22T20:30:29.879886Z",
"url": "https://files.pythonhosted.org/packages/e3/8a/cf780929374a49aa4e432878b7e94ed80c3a4ebf678163ebbfcad838409f/llm_client-0.8.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "cc994d86d1d1b7d66dd23639c82601ce7ec504ffbce9cbd71acf1794f6952973",
"md5": "fc86340cda87d1b139efc007191fba85",
"sha256": "a125fe0f107ef514163928ae0dd2ae97e9420e897c5e2ff03b57df94f094723e"
},
"downloads": -1,
"filename": "llm_client-0.8.0.tar.gz",
"has_sig": false,
"md5_digest": "fc86340cda87d1b139efc007191fba85",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.9",
"size": 20689,
"upload_time": "2023-07-22T20:30:31",
"upload_time_iso_8601": "2023-07-22T20:30:31.568818Z",
"url": "https://files.pythonhosted.org/packages/cc/99/4d86d1d1b7d66dd23639c82601ce7ec504ffbce9cbd71acf1794f6952973/llm_client-0.8.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2023-07-22 20:30:31",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "uripeled2",
"github_project": "llm-client-sdk",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "llm-client"
}