            [![](https://dcbadge.vercel.app/api/server/kW9nBQErGe?compact=true&style=flat)](https://discord.gg/kW9nBQErGe)

# λprompt - Build, compose and call templated LLM prompts!

Write LLM prompts with Jinja templates, compose them in Python as functions, and call them directly or serve them as a webservice!

We believe that large language model prompts are a lot like "functions" in the programming sense, and that they benefit greatly from the power of an interpreted language. lambdaprompt is a library that offers an interface to back that belief up. It allows for building full large-language-model-based "prompt machines", including ones that self-edit to correct themselves and even self-write their own execution code.

`pip install lambdaprompt`

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/bluecoconut/bc5925d0de83b478852f5457ef8060ad/example-prompt.ipynb)

[A webserver (built on `FastAPI`) example repository](https://github.com/approximatelabs/example-lambdaprompt-server)

## Environment variables for using hosted models

To use OpenAI, set your API key as an environment variable, or set it after importing (it is also easy to just put it in a `.env` file, since lambdaprompt uses the `dotenv` package):

`OPENAI_API_KEY=...`
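
For example, you can set the key programmatically before the first prompt call (a minimal sketch; the placeholder key is not a real credential):

```python
import os

# Set the key before any prompt executes. In practice, prefer a .env file
# or your shell environment over hard-coding credentials.
os.environ["OPENAI_API_KEY"] = "sk-..."
```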

## Creating a prompt

Prompts use Jinja templating to create a string, which is then passed to the LLM for completion.

```python
from lambdaprompt import GPT3Prompt

example = GPT3Prompt("Sally had {{ number }} of {{ thing }}. Sally sold ")
# then use it as a function
example(number=12, thing="apples")
```
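
To see exactly what string the LLM receives, you can render the same template with `jinja2` directly. This is for illustration only; lambdaprompt performs this rendering for you internally:

```python
from jinja2 import Template

# The prompt body is an ordinary Jinja template; rendering it produces
# the string that is sent to the LLM for completion.
rendered = Template("Sally had {{ number }} of {{ thing }}. Sally sold ").render(
    number=12, thing="apples"
)
print(rendered)  # Sally had 12 of apples. Sally sold
```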


## Creating ChatGPT conversational prompts

Each prompt can be thought of as a parameterizable conversation: executing the prompt with an input applies that input as "the next line of conversation" and then generates the response.

To update the memory state of the prompt, call its `.add()` method, which adds steps to the conversation so that the prompt "remembers" what has been said.

```python
>>> import lambdaprompt as lp

>>> convo = lp.AsyncGPT3Chat([{'system': 'You are a {{ type_of_bot }}'}])
>>> await convo("What should we get for lunch?", type_of_bot="pirate")
As a pirate, I would suggest we have some hearty seafood such as fish and chips or a seafood platter. We could also have some rum to wash it down! Arrr!
```
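
The exact signature of `.add()` is not shown above, so the following is a hypothetical sketch; it assumes `.add()` accepts the same role-to-message dicts used in the constructor:

```python
# Hypothetical: record the previous exchange so the prompt "remembers" it.
convo.add({'user': 'What should we get for lunch?'})
convo.add({'assistant': 'As a pirate, I would suggest some hearty seafood.'})

# The next call now continues the remembered conversation.
await convo("And what should we drink with it?", type_of_bot="pirate")
```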
## General prompt creation

You can also turn any function into a prompt (useful for composing prompts, or for creating programs out of prompts). This is commonly called "prompt chaining". See how you can achieve this with simple Python composition:
```python
from lambdaprompt import prompt, GPT3Prompt

generate_n_tasks = GPT3Prompt("Today I will do {{ n }} things (comma separated) [", stop="]")
is_happy = GPT3Prompt("The task {{ task_detail }} is a task that will make me happy? (y/n):")

@prompt
def get_tasks_and_rate_is_happy(n=3):
    results = []
    for task in generate_n_tasks(n=n).split(","):
        results.append((task, is_happy(task)))
    return results

print(get_tasks_and_rate_is_happy())
```

## Async and Sync

lambdaprompt works with both sync and async functions, and offers both sync and async templated prompt interfaces:

```python
from lambdaprompt import GPT3Prompt, asyncGPT3Prompt

# sync
first = GPT3Prompt("Sally had {{ number }} of {{ thing }}. Sally sold ")
first(number=12, thing="apples")

# async
first = asyncGPT3Prompt("Sally had {{ number }} of {{ thing }}. Sally sold ")
await first(number=12, thing="apples")
```

```python
from lambdaprompt import prompt

@prompt
def sync_example(a):
    return a + "!"

sync_example("hello")

@prompt
async def async_example(a):
    return a + "!"

await async_example("hello")
```
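
Putting the two together, an async `@prompt` can await other async prompts, which is how larger async pipelines compose. A minimal sketch, reusing the `asyncGPT3Prompt` example from above (`sold_report` is a hypothetical name introduced here):

```python
from lambdaprompt import prompt, asyncGPT3Prompt

first = asyncGPT3Prompt("Sally had {{ number }} of {{ thing }}. Sally sold ")

@prompt
async def sold_report(number, thing):
    # Await a templated prompt inside another prompt to build a pipeline.
    completion = await first(number=number, thing=thing)
    return f"Sally had {number} {thing}: {completion.strip()}"

await sold_report(12, "apples")
```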

### Some special properties

For templated prompts with only a single template variable, you can call the prompt with the variable as a positional argument (no need to pass it as a keyword argument):
```python
basic_qa = asyncGPT3Prompt("""What is the answer to the question [{{ question }}]?""", name="basic_qa")

await basic_qa("Is it safe to eat pizza with chopsticks?")
```


## Using lambdaprompt as a webservice
Simply `pip install lambdaprompt[server]` and then add `from lambdaprompt.server.main import app` to the top of your file!

Make a file `app.py`:
````python
from lambdaprompt import AsyncGPT3Prompt, prompt
from lambdaprompt.server.main import app

AsyncGPT3Prompt(
    """Rewrite the following as a {{ target_author }}. 
```
{{ source_text }}
```
Output:
```
""",
    name="rewrite_as",
    stop="```",
)
````

Then run:
```
uvicorn app:app --reload
```

Browse to `http://localhost:8000/docs` to see the Swagger docs generated for the prompts!
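
Once the server is running, each named prompt is exposed as an endpoint. The route and request shape below are assumptions for illustration; check the generated Swagger docs at `/docs` for the actual interface:

```python
import requests

# Hypothetical client call for the "rewrite_as" prompt defined above.
# The path and parameter names are assumptions; verify them against /docs.
resp = requests.get(
    "http://localhost:8000/prompt/rewrite_as",
    params={"target_author": "Shakespeare", "source_text": "hello world"},
)
print(resp.json())
```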

## Running inside docker

First, create a `.env` file with your OpenAI API key (e.g. `OPENAI_API_KEY=sk-...`). Then, from the repository root (which provides the Dockerfile), build and run:

```
docker build . -t lambdaprompt:0.0.1
docker run -it --env-file .env lambdaprompt:0.0.1  bash -c "python two.py"
```

This will output something like the following (here `two.py` is a script along the lines of the prompt-chaining example above; since LLM outputs are nondeterministic, repeated runs differ):

```
docker run -it --env-file .env lambdaprompt:0.0.1  bash -c "python two.py"
[('example: go for a walk', '\n\nYes. Going for a walk can be a great way to boost your mood and get some fresh air.'), (' read a book', '\n\nYes'), (' call a friend', '\n\nYes')]

docker run -it --env-file .env lambdaprompt:0.0.1  bash -c "python two.py"
[(' edit ', '\n\nNo. Editing can be a tedious and time-consuming task, so it may not necessarily make you happy.')]
```


## Design Patterns (TODO)
- Response Optimization
  - [Ideation, Scoring and Selection](link)
  - [Error Correcting Language Loops](link)
- Summarization and Aggregations
  - [Rolling](link)
  - [Fan-out-tree](link)
- [Meta-Prompting](link)

            
