# OS (Open Source) LLM Agents
A library for building LLM agents on top of open-source models from the Hugging Face Hub.
## Installation
### From source
Run the following in the root directory of this repository:
```
pip install .
```
## Example usage
Import the required packages:
```python
from os_llm_agents.models import CustomLLM
from os_llm_agents.executors import AgentExecutor
import torch
from transformers import BitsAndBytesConfig
```
Optionally, initialize a quantization config:
```python
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
```
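For context, 4-bit NF4 quantization roughly quarters the weight memory compared to 16-bit precision. A back-of-the-envelope calculation for an 8B-parameter model (ignoring activation memory and quantization overhead):

```python
# Approximate weight memory for an 8B-parameter model.
params = 8e9

bf16_bytes = params * 2    # bfloat16: 2 bytes per parameter
nf4_bytes = params * 0.5   # 4-bit NF4: 0.5 bytes per parameter

print(f"bf16: {bf16_bytes / 1e9:.0f} GB, nf4: {nf4_bytes / 1e9:.0f} GB")
# → bf16: 16 GB, nf4: 4 GB
```

`bnb_4bit_use_double_quant=True` additionally quantizes the quantization constants themselves, shaving off a bit more memory at negligible cost.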
Initialize the model:
```python
llm = CustomLLM(
    model_name="meta-llama/Meta-Llama-3-8B-Instruct",
    quantization_config=quantization_config,
)
```
Define the tool:
```python
def multiply(**kwargs) -> int:
    """Multiply two integers together."""
    n1, n2 = kwargs["n1"], kwargs["n2"]
    return n1 * n2

multiply_tool = {
    "name": "multiply",
    "description": "Multiply two numbers",
    "parameters": {
        "type": "object",
        "properties": {
            "n1": {
                "type": "int",
                "description": "Number one",
            },
            "n2": {
                "type": "int",
                "description": "Number two",
            },
        },
        "required": ["n1", "n2"],
    },
    "implementation": multiply,  # Attach the function implementation
}
```
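Although the executor handles tool invocation internally, the `implementation` key makes the dispatch logic easy to picture. A minimal sketch of the idea (the `dispatch` helper and the parsed `tool_call` dict are hypothetical illustrations, not part of the library's API):

```python
def multiply(**kwargs) -> int:
    """Multiply two integers together."""
    return kwargs["n1"] * kwargs["n2"]

# A trimmed-down tool entry carrying the callable implementation.
multiply_tool = {"name": "multiply", "implementation": multiply}

def dispatch(tool_call, tools):
    """Look up the requested tool by name and call its implementation."""
    by_name = {tool["name"]: tool for tool in tools}
    return by_name[tool_call["name"]]["implementation"](**tool_call["arguments"])

# A parsed model response requesting the multiply tool:
tool_call = {"name": "multiply", "arguments": {"n1": 12, "n2": 12}}
print(dispatch(tool_call, [multiply_tool]))  # → 144
```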
Initialize the AgentExecutor:
```python
executor = AgentExecutor(
    llm=llm,
    tools=[multiply_tool],
    system_prompt="You are a helpful assistant",
)
```
Run the agent:
```python
chat_history = None

result = executor.invoke("What can you do for me?")

chat_history = result["chat_history"]
print("Response: ", result["response"].content)

>>> Response: I'm a helpful assistant! I can help you with a variety of tasks. I have access to a function called "multiply" that allows me to multiply two numbers. I can also provide information and answer questions to the best of my knowledge. If you need help with something specific, feel free to ask!

result = executor.invoke("Multiply 12 by 12", chat_history)

chat_history = result["chat_history"]
print("Response: ", result["response"].content)

>>> Response: 144

print(len(chat_history))

>>> 5
```
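The final length of 5 is consistent with `chat_history` being a flat message list in the usual role/content chat format: one system message plus one user/assistant pair per turn. This is a plausible reconstruction, not a guarantee about the library's internals (intermediate tool-call messages, if stored, would change the count):

```python
# Hypothetical contents of chat_history after the two turns above:
chat_history = [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "What can you do for me?"},
    {"role": "assistant", "content": "I'm a helpful assistant! ..."},
    {"role": "user", "content": "Multiply 12 by 12"},
    {"role": "assistant", "content": "144"},
]
print(len(chat_history))  # → 5
```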