fastllm

Name: fastllm
Version: 0.2.1
Summary: Fast and easy wrapper around LLMs.
Home page: https://github.com/clemens33/fastllm
Author: Clemens Kriechbaumer
Upload time: 2023-12-17 16:35:00
Requires Python: >=3.10,<4.0
License: Apache-2.0
Keywords: agents, chatbots, openai, llm, ai
Requirements: none recorded
# FastLLM

Fast and simple wrapper around LLMs. The package aims to be simple and precise, allowing fast prototyping of agents and applications around LLMs. At the moment the focus is on OpenAI's chat models.

**Warning - experimental package and subject to change.** For features and plans see the [roadmap](#roadmap).

## Installation

```bash
pip install fastllm
```

## [Samples](./samples)

Running the samples requires an OpenAI API key in the `OPENAI_API_KEY` environment variable or a `.env` file.

```bash
export OPENAI_API_KEY=...
```

### Agents

```python
from fastllm import Agent

find_cities = Agent("List {{ n }} cities comma separated in {{ country }}.")
cities = find_cities(n=3, country="Austria").split(",")

print(cities)
```

```bash
['Vienna', 'Salzburg', 'Graz']
```

```python
from fastllm import Agent, Message, Model, Prompt, Role

creative_name_finder = Agent(
    Message("You are an expert name finder.", Role.SYSTEM),
    Prompt("Find {{ n }} names.", temperature=2.0),
    Prompt("Print names comma separated, nothing else!"),
    model=Model(name="gpt-4"),
)

names = creative_name_finder(n=3).split(",")

print(names)
```

```bash
['Ethan Gallagher, Samantha Cheng, Max Thompson']
```

#### Functions

Functions can be added to Agents, Models, or Prompts, either as initial arguments or via a decorator. A function's type hints, docstring, and name are inferred from the function and added to the model call.

```python
from typing import Literal

from fastllm import Agent, Prompt

calculator_agent = Agent(
    Prompt("Calculate the result for task: {{ task }}"),
    Prompt("Only give the result number as result without anything else!"),
)

@calculator_agent.function
def calculator(a, b, operator: Literal["+", "-", "*", "/"]):
    """A basic calculator using various operators."""

    match operator:
        case "+":
            return a + b
        case "-":
            return a - b
        case "*":
            return a * b
        case "/":
            return a / b
        case _:
            raise ValueError(f"Unknown operator {operator}")


result = calculator_agent(task="give the final result for (11 + 14) * (6 - 2)")

print(result)
```

```bash
100
```

```python
another_result = calculator_agent(
    task="If I have 114 apples and 3 elephants, how many apples will each elephant get?"
)

print(another_result)
```

```bash
38
```
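
The schema inference described above can be sketched roughly as follows, using `inspect` and `typing`. This is a simplified sketch, not the package's actual implementation, and only a few hint types are handled:

```python
import inspect
from typing import Literal, get_args, get_origin, get_type_hints


def function_schema(fn):
    """Infer an OpenAI-style function schema from a Python function's
    name, docstring, and type hints (simplified sketch)."""
    hints = get_type_hints(fn)
    props = {}
    for name in inspect.signature(fn).parameters:
        hint = hints.get(name)
        if get_origin(hint) is Literal:
            # Literal["+", "-"] becomes an enum of allowed strings
            props[name] = {"type": "string", "enum": list(get_args(hint))}
        elif hint in (int, float):
            props[name] = {"type": "integer" if hint is int else "number"}
        else:
            props[name] = {"type": "string"}
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "parameters": {"type": "object", "properties": props, "required": list(props)},
    }


def calculator(a: float, b: float, operator: Literal["+", "-"]):
    """A basic calculator."""
    return a + b if operator == "+" else a - b


schema = function_schema(calculator)
```

The resulting dict is what gets attached to the model call so the LLM knows which functions it may invoke and with which arguments.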

#### Avoid words/phrases 

Avoid/ban words and phrases - patterns are supported. Patterns follow regex syntax but do not support all features. If the number of possible strings matching a pattern is too large, the pattern is ignored.

When avoiding/banning words it is typically advised to put a [blank space](https://community.openai.com/t/reproducible-gpt-3-5-turbo-logit-bias-100-not-functioning/88293/8) in front of the word.
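
The "too large" cutoff can be sketched like this: expand a tiny regex subset (literal characters, explicit `[...]` classes, and an optional `?`) into the finite set of matching strings, and ignore the pattern once the set exceeds a limit. `expand_pattern` and the limit are illustrative assumptions, not the package's actual API; character ranges and escapes are not handled:

```python
import itertools
import re


def expand_pattern(pattern: str, limit: int = 100):
    """Expand a tiny regex subset into the set of matching strings,
    or return None if the pattern matches too many strings."""
    parts = []
    # tokenize into per-position alternatives: "[Cc]" -> ["C", "c"],
    # a trailing "?" additionally allows the empty string
    for m in re.finditer(r"\[([^\]]+)\](\?)?|(.)(\?)?", pattern):
        cls, opt1, lit, opt2 = m.groups()
        alts = list(cls) if cls is not None else [lit]
        if opt1 or opt2:
            alts.append("")
        parts.append(alts)
    out = set()
    for combo in itertools.product(*parts):
        out.add("".join(combo))
        if len(out) > limit:
            return None  # too many matches: pattern would be ignored
    return out
```

For example, `expand_pattern(r"[ ]?[Cc]at")` yields the four strings `" Cat"`, `" cat"`, `"Cat"`, and `"cat"`, which could then all be banned via logit bias.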

```python
cat = Agent(
    Prompt("Say Cat!"),
)

print(cat())
```

```bash
Cat!
```

Now we avoid/ban the regex pattern `r"[ ]?Cat"` (matching " Cat" or "Cat") in the response.

```python
not_cat = Agent(
    Prompt("Say Cat!", avoid=r"[ ]?Cat"),
)

print(not_cat())
```

OpenAI is making fun of us (that really happened!) - obviously we need to be more specific (e.g. ban both lowercase and uppercase).

```bash
Dog! Just kidding, cat!
```

Ok let's try again.

```python
seriously_not_a_cat = Agent(
    Prompt("Say Cat!, PLEEASSEE", avoid=r"[ ]?[Cc][aA][tT]"),
)

print(seriously_not_a_cat())
```

Well, no cat, but kudos for the effort.

```bash
Sure, here you go: "Meow! "
```

#### Prefer words/phrases

Prefer words/phrases - patterns are supported. Patterns follow regex syntax but do not support all features; only patterns matching a limited number of strings are supported. The max token length is set to the longest possible string in the pattern, and the order of tokens cannot be guaranteed.
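
Under the hood this kind of preference can be implemented with logit bias: every token of a preferred string gets a strong positive bias, and `max_tokens` is derived from the longest tokenization. The sketch below is an assumption about the mechanism, not the package's actual code, and uses a toy character-level tokenizer in place of the model's real one:

```python
def build_logit_bias(strings, encode, bias=100):
    """Bias every token of the preferred strings and derive max_tokens.

    The bias is per-token, not per-sequence - which is exactly why the
    order of tokens in the completion cannot be guaranteed.
    """
    token_ids = set()
    longest = 0
    for s in strings:
        ids = encode(s)
        token_ids.update(ids)
        longest = max(longest, len(ids))
    return {tid: bias for tid in token_ids}, longest


# toy character-level "tokenizer" standing in for the model's real one
encode = lambda s: [ord(c) for c in s]
logit_bias, max_tokens = build_logit_bias(["Meow!"], encode)
```

Because the model only sees a bag of boosted tokens, completions like `Austria, 1: Germany, 3` below are possible: the preferred tokens appear, but not necessarily in the preferred order.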

```python
meow = Agent(
    Prompt("Say Hi!", prefer="Meow!"),
)

print(meow())
```

```bash
Meow!
```

```python
austria_wins = Agent(
    Prompt("Predict the score for Austria against Germany.", prefer=r"Austria: [3-4], Germany: [0-1]"),
)

print(austria_wins())
```

Order of tokens is not guaranteed (and this is obviously false ;))

```bash
Austria, 1: Germany, 3
```

```python
meow = Agent(
    Prompt("Say Hi!", prefer="Meow!", max_tokens=10),
)

print(meow())
```

Our model can only say "Meow" or "!" for up to 10 tokens.

```bash
Meow!!!!Meow!Meow!Meow!Meow!Meow
```

## Roadmap

### Features

- [x] Prompts using jinja2 templates
- [x] LLM calling with backoff and retry
- [x] Able to register functions to agents, models and prompts using decorators
- [x] Possible to register functions on multiple levels (agent, model, prompt). The function call is only available on the level it was registered.
- [x] Conversation history. The Model class keeps track of the conversation history.
- [x] Function schema is inferred from python function type hints, documentation and name
- [x] Function calling is handled by the Model class itself. Meaning if an LLM response indicates a function call, the Model class will call the function and return the result back to the LLM
- [x] Streaming with function calling
- [ ] Function calling can result in an infinite loop if LLM can not provide function name or arguments properly. This needs to be handled by the Model class.
- [ ] Force particular function call by providing function call argument
- [ ] Option to "smartly forget" conversation history in case context length is too long.
- [x] Prompts with pattern using logit bias to guide LLM completion.
- [ ] Able to switch between models (e.g. 3.5 and 4) within one agent over different prompts.
- [ ] Handling of multiple response messages from LLMs in a single call. At the moment only the first response is kept.
- [ ] Supporting non chat based LLMs (e.g. OpenAI's completion LLMs).
- [ ] Supporting LLM APIs other than OpenAI's (e.g. Google etc.)
- [ ] Supporting local LLMs (e.g. llama-1, llama-2, etc.)
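
For example, the backoff-and-retry item above can be sketched like this (an illustrative sketch with assumed names, not the package's implementation):

```python
import random
import time


def call_with_backoff(fn, *, retries=5, base=1.0, cap=30.0):
    """Retry fn on exceptions, sleeping with exponential backoff and jitter."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries: surface the last error
            # full jitter: random delay in [0, min(cap, base * 2**attempt))
            time.sleep(min(cap, base * 2 ** attempt) * random.random())
```

Wrapping each LLM API call in such a helper keeps transient errors (e.g. rate limits) from failing the whole agent run.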

### Package

- [x] Basic package structure and functionality
- [x] Test cases and high test coverage
- [ ] Mock implementation of OpenAI's API for tests
- [ ] Tests against multiple python versions
- [ ] 100% test coverage (at the moment around 90%)
- [ ] Better documentation including readthedocs site.
- [ ] Better error handling and logging
- [ ] Better samples using jupyter notebooks
- [ ] Set up of pre-commit
- [ ] CI using github actions
- [ ] Release and versioning

## Development

Using [poetry](https://python-poetry.org/docs/#installation).

```bash
poetry install
```

### Tests

```bash
poetry run pytest
``` 

### Publish

```bash
poetry publish --build
```


            
