flowchat

Name: flowchat
Version: 1.3.3
Summary: Streamlining the process of multi-prompting LLMs with chains
Home page: https://github.com/flatypus/flowchat
Author / Maintainer: Hinson Chan
License: MIT
Requires Python: not specified
Keywords: openai, gpt3, gpt-3, gpt4, gpt-4, chatbot, ai, nlp, prompt, prompt-engineering, toolkit
Uploaded: 2024-06-24 13:27:40
            # flowchat - clean, readable, logical code.

[![PyPI version](https://img.shields.io/pypi/v/flowchat.svg)](https://pypi.org/project/flowchat/)
[![License](https://img.shields.io/pypi/l/flowchat?logoColor=blue)](LICENSE.txt)
![Downloads](https://img.shields.io/pypi/dm/flowchat?logoColor=blue)

A Python library for building clean and efficient multi-step prompt chains. It is built on top of [OpenAI's Python API](https://github.com/openai/openai-python).

![why](https://github.com/flatypus/flowchat/assets/68029599/969968aa-6250-4cc1-bb73-2a0930270fbf)

## What is Flowchat?
Flowchat is designed around the idea of a *chain*. Start the chain with `.anchor()`, which contains a system prompt. Use `.link()` to add additional messages.

To get a response from the LLM, use `.pull()`. You can also pass a schema, e.g. `.pull(json_schema={"city": "string"})`, to define the expected shape of the output; the response is validated against the schema and returned as a JSON object. Each pulled response is stored in an internal response variable.
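
For instance, a single-stage chain with a schema-validated pull might look like this (a minimal sketch; the prompt, model choice, and schema here are illustrative assumptions, not part of the library):

```py
from flowchat import Chain

capital_json = (
    Chain(model="gpt-3.5-turbo")
    .anchor("You are a geography assistant.")  # system prompt
    .link("What is the capital of Japan?")     # user message
    .pull(json_schema={"city": "string"})      # validated against the schema
    .last()                                    # e.g. {"city": "Tokyo"}
)
print(capital_json["city"])
```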

When you're done with one stage of your chain, you can log that stage's messages and responses with `.log()`, then reset the conversation messages with `.unhook()`.
Unhooking **does not** reset the internal response variable.

That is the point of 'chaining': each stage can use the response from the stage before it.
For example, when using `link` in the second stage, you can access the first stage's response through a lambda function: `.link(lambda response: f"Previous response: {response}")`.
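
Putting these pieces together, a minimal two-stage sketch might look like this (the prompts are illustrative, not from the library's docs):

```py
haiku = (
    Chain(model="gpt-3.5-turbo")
    .link("Give me one fun fact about octopuses.")
    .pull().log().unhook()  # messages reset; the response carries forward

    # the lambda receives the previous stage's response
    .link(lambda fact: f"Rewrite this fact as a haiku:\n{fact}")
    .pull().last()
)
print(haiku)
```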

You can use `.transform()` to transform the response from the previous stage into something else. For example, you can use `.transform(lambda response: response["city"])` to get the city from the response JSON object, or even map over a response list with a nested chain! You'll see more ways to use these functions in the [examples](/examples/natural_language_cli.py).
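
As a sketch of both ideas, here is a chain that splits a plain-text response into a list and then maps a nested chain over each item (the prompts, and the assumption that `.transform()` calls can be chained back-to-back, are illustrative rather than confirmed by the docs):

```py
french_greetings = (
    Chain(model="gpt-3.5-turbo")
    .link("List three short English greetings, one per line, no numbering.")
    .pull()
    # turn the raw string response into a list of lines
    .transform(lambda response: response.strip().splitlines())
    # map over the list with a nested chain per item
    .transform(lambda greetings: [
        Chain(model="gpt-3.5-turbo")
        .link(f"Translate this greeting into French: {g}")
        .pull().last()
        for g in greetings
    ])
    .last()
)
```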

When you're finished with the entire chain, simply use `.last()` to return the last response.

Check out these [example chains](/examples) to get started!

## Installation
```bash
pip install flowchat
```

## Setup
Put your OpenAI API key in your environment file (e.g. `.env`) as `OPENAI_API_KEY=sk-xxxxxx`. If your project stores the key under a different name (such as `OPENAI_KEY`), pass that name with `Chain(environ_key="OPENAI_KEY")`. Alternatively, pass the key itself when initializing the chain: `Chain(api_key="sk-xxxxxx")`.
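
For reference, here are the three ways to supply the key side by side (a sketch; `sk-xxxxxx` is a placeholder, and passing `model` alongside is just for illustration):

```py
from flowchat import Chain

chain = Chain(model="gpt-3.5-turbo")                            # reads OPENAI_API_KEY
chain = Chain(model="gpt-3.5-turbo", environ_key="OPENAI_KEY")  # custom env var name
chain = Chain(model="gpt-3.5-turbo", api_key="sk-xxxxxx")       # pass the key directly
```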

## Example Usage
```py
from flowchat import Chain

chain = (
    Chain(model="gpt-3.5-turbo")  # default model for all pull() calls
    .anchor("You are a historian.")  # Set the first system prompt
    .link("What is the capital of France?")
    .pull().log().unhook()  # Pull the response, log it, and reset prompts

    .link(lambda desc: f"Extract the city in this statement: {desc}")
    .pull(json_schema={"city": "string"})  # Pull the response and validate it
    .transform(lambda city_json: city_json["city"])  # Get city from JSON
    .log().unhook()

    .anchor("You are an expert storyteller.")
    .link(lambda city: f"Design a basic three-act point-form short story about {city}.")
    .link("How long should it be?", assistant=True)
    .link("Around 100 words.")  # (For example) you can make multiple links!
    .pull(max_tokens=512).log().unhook()

    .anchor("You are a novelist. Your job is to write a novel about a story that you have heard.")
    .link(lambda storyline: f"Briefly elaborate on the first act of the storyline: {storyline}")
    .pull(max_tokens=256, model="gpt-4-turbo").log().unhook()

    .link(lambda act: f"Summarize this act in around three words:\n{act}")
    .pull(model="gpt-4")
    .log_tokens()  # Log token usage of the whole chain
)

print(f"Result: {chain.last()}") # >> "Artist's Dream Ignites"
```

### Natural Language CLI

This is the short version, which doesn't check whether the requested command is actually possible to run. If you want a longer example with **nested chains**, check out the [full version](/examples/natural_language_cli.py).

```py
from flowchat import Chain, autodedent
import os
import subprocess


def execute_system_command(command):
    # Run the suggested command in a shell, capturing stdout/stderr as text
    try:
        result = subprocess.run(
            command, shell=True, check=True,
            stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
        )
        return result.stdout
    except subprocess.CalledProcessError as e:
        # On a non-zero exit code, return the captured stderr instead
        return e.stderr


def main():
    print("Welcome to the Natural Language Command Line Interface!")
    os_system_context = f"You are a shell interpreter assistant running on {os.name} operating system."

    while True:
        user_input = input("Please enter your command in natural language: ")

        should_exit = (
            Chain(model="gpt-3.5-turbo")
            .link(autodedent(
                "Does the user want to exit the CLI? Respond with 'YES' or 'NO'.",
                user_input
            )).pull(max_tokens=2).unhook().last()
        )

        if should_exit.lower() in ("yes", "y"):
            print("Exiting the CLI.")
            break

        # Feed the input to flowchat
        command_suggestion = (
            Chain(model="gpt-4-turbo")
            .anchor(os_system_context)
            .link(autodedent(
                "The user wants to do this: ",
                user_input,
                "Suggest a command that can achieve this in one line without user input or interaction."
            )).pull().unhook()

            .anchor(os_system_context)
            .link(lambda suggestion: autodedent(
                "Extract ONLY the command from this command desciption:",
                suggestion
            ))
            # define a JSON schema to extract the command from the suggestion
            .pull(json_schema={"command": "echo 'Hello World!'"})
            .transform(lambda command_json: command_json["command"])
            .unhook().last()
        )

        print(f"Suggested command: {command_suggestion}")

        # Execute the suggested command and get the result
        command_output = execute_system_command(command_suggestion)
        print(f"Command executed. Output:\n{command_output}")

        if command_output != "":
            description = (
                Chain(model="gpt-3.5-turbo").anchor(os_system_context)
                .link(f"Describe this output:\n{command_output}")
                .pull().unhook().last()
            )
            # Logging the description
            print(f"Explanation:\n{description}")

        print("=" * 60)


if __name__ == "__main__":
    main()
```

This project is under the MIT license.