| Field | Value |
| --- | --- |
| Name | lloam |
| Version | 0.1.2 |
| Summary | A fertile collection of primitives for building things with LLMs |
| Home page | https://github.com/LachlanGray/lloam |
| Author | Lachlan Gray |
| Upload time | 2024-10-20 19:50:38 |
| Maintainer | None |
| Requires Python | None |
| License | None |
| Requirements | None recorded |

*Rich primitives for building with LLMs*
# Lloam 🌱
Lloam is a minimal prompting library offering a clean way to write prompts and manage their execution. Key features:
- **Parallel:** completions run concurrently
- **Lightweight:** only dependency is `openai`
- **Lloam prompts:** clean function syntax for inline prompts
## Usage
```
pip install lloam
```
Overview: [completions](#lloam-completions), [prompts](#lloam-prompts), [agents](#lloam-agents)
### Lloam Completions
`lloam.completion` is a simple and familiar way to generate completions. It returns a `Completion` object, which manages the token stream. Tokens are accumulated concurrently, meaning completions won't block your program until you access their results (e.g. with `str()` or `print()`).
```python
from lloam import completion

# strings
prompt = "Snap, crackle, and"
who = completion(prompt, stop="!", model="gpt-3.5-turbo")

# lists
chunks = ["The capi", "tal of", " France ", "is", "?"]
capitol = completion(chunks, stop=[".", "!"])

# messages
messages = [
    {"role": "system", "content": "You answer questions in haikus"},
    {"role": "user", "content": "What's loam"}
]
poem = completion(messages)

# ...completions are running concurrently...

print(who)     # pop
print(capitol) # The capital of France is Paris
print(poem)    # Soil rich and robust,
               # A blend of clay, sand, and silt,
               # Perfect for planting.
```
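The non-blocking behavior described above can be sketched with a background thread that accumulates tokens while the main program continues. This is a hypothetical illustration of the concurrency model, not lloam's actual implementation; `LazyCompletion` and its fake token stream are invented for the example:

```python
import threading
import time

class LazyCompletion:
    """Toy sketch of a non-blocking completion: tokens accumulate on a
    background thread; str() blocks only until the stream finishes."""

    def __init__(self, tokens, delay=0.01):
        self._tokens = []
        self._done = threading.Event()
        self._thread = threading.Thread(
            target=self._consume, args=(tokens, delay), daemon=True
        )
        self._thread.start()

    def _consume(self, tokens, delay):
        for tok in tokens:  # stands in for an API token stream
            time.sleep(delay)
            self._tokens.append(tok)
        self._done.set()

    def __str__(self):
        self._done.wait()  # block only when the result is actually needed
        return "".join(self._tokens)

result = LazyCompletion(["Hello", ", ", "world"])
print("This prints immediately!")
print(result)  # blocks briefly, then: Hello, world
```

The point is that work starts at construction time, and synchronization happens only at the moment of use.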
### Lloam Prompts
Lloam prompts offer a clean templating syntax for writing more complex prompts inline. The language model fills the `[holes]`, while `{variables}` are substituted into the prompt. Lloam prompts run concurrently, just like completions; under the hood, each one manages a sequence of Completions.
```python
import lloam

@lloam.prompt(model="gpt-3.5-turbo")
def group_name(x, n=5):
    """
    One kind of {x} is a [name].

    {n} {name}s makes a [group_name].
    """

animal = group_name("domestic animal")
print("This prints immediately!")

# access variables later
print(animal.name)       # dog
print(animal.group_name) # pack
```
You can also inspect the live state of a prompt with `.inspect()`:
```python
musician_type = group_name("musician", n=3)

import time
for _ in range(3):
    print(musician_type.inspect())
    print("---")
    time.sleep(0.5)

print(musician_type.name)
print(musician_type.group_name)

# output:

# One kind of musician is a [ ... ].
#
# 3 [ ... ]s makes a [ ].
# ---
# One kind of musician is a singer-songwriter.
#
# 3 singer-songwriters makes a [ ... ].
# ---
# One kind of musician is a singer-songwriter.
#
# 3 singer-songwriters makes a trio.
# ---
# singer-songwriter
# trio
```
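One way such a template could be parsed (a sketch of the idea, not lloam's actual parser) is to scan the docstring for `{variables}` to substitute and `[holes]` for the model to fill; `split_template` here is an invented helper name:

```python
import re

TEMPLATE = """
One kind of {x} is a [name].

{n} {name}s makes a [group_name].
"""

def split_template(template):
    """Return the hole names and variable names in order of appearance."""
    holes = re.findall(r"\[(\w+)\]", template)        # filled by the model
    variables = re.findall(r"\{(\w+)\}", template)    # substituted in
    return holes, variables

holes, variables = split_template(TEMPLATE)
print(holes)      # ['name', 'group_name']
print(variables)  # ['x', 'n', 'name']
```

Note that `name` appears in both lists: a hole filled early in the template can be referenced as a variable later, which is why a prompt amounts to a sequence of chained completions.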
### Lloam Agents
Lloam encourages you to think of an agent as a data structure around language. Here's how you could make a RAG agent that has:
- a chat history
- a database
- a context for retrieved artifacts
You can see another example in `examples/shell_agent.py`. More stuff on agents coming soon!
```python
import lloam

class RagAgent:
    def __init__(self, db):
        self.db = db
        self.history = []
        self.artifacts = {}

    def ask(self, question):
        self.history.append({"role": "user", "content": question})

        results = self.db.query(question)
        self.artifacts.update(results)

        answer = self.answer(question)

        self.history.append({"role": "assistant", "content": answer.answer})

        return {
            "answer": answer.answer,
            "followup": answer.followup
        }

    @lloam.prompt
    def answer(self, question):
        """
        {self.artifacts}
        ---
        {self.history}

        user: {question}

        [answer]

        What would be a good followup question?
        [followup]
        """
```
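The agent above assumes a `db` object exposing a `query(question)` method that returns a dict of artifacts. Here is a minimal in-memory stand-in for trying it out; `KeywordDB` is purely illustrative, and any retrieval backend with the same interface would work:

```python
import re

class KeywordDB:
    """Tiny in-memory retriever: returns documents that share at least
    one word with the question. Purely illustrative."""

    def __init__(self, docs):
        self.docs = docs  # {doc_id: text}

    def _tokens(self, text):
        return set(re.findall(r"\w+", text.lower()))

    def query(self, question):
        words = self._tokens(question)
        return {
            doc_id: text
            for doc_id, text in self.docs.items()
            if words & self._tokens(text)
        }

db = KeywordDB({
    "soil.txt": "Loam is soil composed of sand, silt, and clay.",
    "music.txt": "A trio is a group of three musicians.",
})
print(db.query("loam clay"))  # {'soil.txt': 'Loam is soil composed of sand, silt, and clay.'}
```

With a stand-in like this, `RagAgent(db).ask("What is loam?")` retrieves matching documents into `self.artifacts` before the `answer` prompt runs.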