# chainforge

- **Version:** 0.3.1.8
- **Home page:** https://github.com/ianarawjo/ChainForge/
- **Summary:** A Visual Programming Environment for Prompt Engineering
- **Upload time:** 2024-04-30 01:18:50
- **Author:** Ian Arawjo
- **Requires Python:** >=3.8
- **License:** MIT
- **Keywords:** prompt engineering, LLM, response evaluation

# ⛓️🛠️ ChainForge

**An open-source visual programming environment for battle-testing prompts to LLMs.**

<img width="1517" alt="banner" src="https://github.com/ianarawjo/ChainForge/assets/5251713/570879ef-ef8a-4e00-b37c-b49bc3c1a370">

ChainForge is a data flow prompt engineering environment for analyzing and evaluating LLM responses. It is geared towards early-stage, quick-and-dirty exploration of prompts, chat responses, and response quality that goes beyond ad-hoc chatting with individual LLMs. With ChainForge, you can:

- **Query multiple LLMs at once** to test prompt ideas and variations quickly and effectively.
- **Compare response quality across prompt permutations, across models, and across model settings** to choose the best prompt and model for your use case.
- **Set up evaluation metrics** (scoring functions) and immediately visualize results across prompts, prompt parameters, models, and model settings.
- **Hold multiple conversations at once across template parameters and chat models.** Template not just prompts, but follow-up chat messages, and inspect and evaluate outputs at each turn of a chat conversation.

[Read the docs to learn more.](https://chainforge.ai/docs/) ChainForge comes with a number of example evaluation flows to give you a sense of what's possible, including 188 example flows generated from benchmarks in OpenAI evals.

**This is an open beta of ChainForge.** We support the model providers OpenAI, HuggingFace, Anthropic, Google PaLM2, Azure OpenAI endpoints, and [Dalai](https://github.com/cocktailpeanut/dalai)-hosted models Alpaca and Llama. You can change the exact model and individual model settings. Visualization nodes support numeric and boolean evaluation metrics. Try it and let us know what you think! :)

ChainForge is built on [ReactFlow](https://reactflow.dev) and [Flask](https://flask.palletsprojects.com/en/2.3.x/).

# Table of Contents

- [Documentation](https://chainforge.ai/docs/)
- [Installation](#installation)
- [Example Experiments](#example-experiments)
- [Share with Others](#share-with-others)
- [Features](#features) (see the [docs](https://chainforge.ai/docs/nodes/) for more comprehensive info)
- [Development and How to Cite](#development)

# Installation

You can install ChainForge locally, or try it out on the web at **https://chainforge.ai/play/**. The web version of ChainForge has a limited feature set. In a locally installed version you can load API keys automatically from environment variables, write Python code to evaluate LLM responses, or query locally-run Alpaca/Llama models hosted via Dalai.

To install ChainForge on your machine, make sure you have Python 3.8 or higher, then run

```bash
pip install chainforge
```

Once installed, do

```bash
chainforge serve
```

Open [localhost:8000](http://localhost:8000/) in Google Chrome, Firefox, Microsoft Edge, or Brave.

You can set your API keys by clicking the Settings icon in the top-right corner. If you prefer not to worry about this every time you open ChainForge, we recommend saving your OpenAI, Anthropic, and Google PaLM API keys and/or Amazon AWS credentials to your local environment. For more details, see [How to Install](https://chainforge.ai/docs/getting_started/).

## Run using Docker

You can use our [Dockerfile](/Dockerfile) to run `ChainForge` locally using `Docker Desktop`:

- Build the image from the `Dockerfile`:
  ```shell
  docker build -t chainforge .
  ```

- Run the image:
  ```shell
  docker run -p 8000:8000 chainforge
  ```

Now open `http://127.0.0.1:8000` in the browser of your choice.

# Supported providers

- OpenAI
- Anthropic
- Google (Gemini, PaLM2)
- HuggingFace (Inference and Endpoints)
- [Ollama](https://github.com/jmorganca/ollama) (locally-hosted models)
- Microsoft Azure OpenAI Endpoints
- [AlephAlpha](https://app.aleph-alpha.com/)
- Foundation models via Amazon Bedrock on-demand inference, including Anthropic Claude 3
- ...and any other provider through [custom provider scripts](https://chainforge.ai/docs/custom_providers/)!
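
To give a rough sense of what a custom provider script looks like, here is a minimal sketch. The import path, `@provider` decorator, and call signature shown below are assumptions on our part; treat the [custom provider docs](https://chainforge.ai/docs/custom_providers/) as the authoritative interface.

```python
# Hypothetical custom provider script (names and signature are assumptions;
# see https://chainforge.ai/docs/custom_providers/ for the real interface).
from chainforge.providers import provider  # assumed import path


@provider(name="EchoProvider",      # display name in the model picker (assumed parameter)
          emoji="🔁",
          models=["echo-v1"])       # model ids to expose (assumed parameter)
def echo_provider(prompt: str, model: str = "echo-v1", **settings) -> str:
    """Return a completion string for the given prompt.

    A real provider would call an external API or a locally hosted model here.
    """
    return f"[{model}] You said: {prompt}"
```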

# Example experiments

We've prepared a couple of example flows to give you a sense of what's possible with ChainForge.
Click the "Example Flows" button in the top-right corner and select one. Here is a basic comparison example, plotting the length of responses across different models and arguments for the prompt parameter `{game}`:

<img width="1593" alt="basic-compare" src="https://github.com/ianarawjo/ChainForge/assets/5251713/43c87ab7-aabd-41ba-8d9b-e7e9ebe25c75">

You can also conduct **ground truth evaluations** using Tabular Data nodes. For instance, we can compare each LLM's ability to answer math problems by comparing each response to the expected answer:

<img width="1775" alt="Screen Shot 2023-07-04 at 9 21 50 AM" src="https://github.com/ianarawjo/ChainForge/assets/5251713/6d842f7a-f747-44f9-b317-95bec73653c5">

# Compare responses across models and prompts

Compare across models and prompt variables with an interactive response inspector, including a formatted table and exportable data:

<img width="1460" alt="Screen Shot 2023-07-19 at 5 03 55 PM" src="https://github.com/ianarawjo/ChainForge/assets/5251713/6aca2bd7-7820-4256-9e8b-3a87795f3e50">

Here's [a tutorial to get started comparing across prompt templates](https://chainforge.ai/docs/compare_prompts/).

# Share with others

The web version of ChainForge (https://chainforge.ai/play/) includes a Share button.

Simply click Share to generate a unique link for your flow and copy it to your clipboard:

![ezgif-2-a4d8048bba](https://github.com/ianarawjo/ChainForge/assets/5251713/1c69900b-5a0f-4055-bbd3-ea191e93ecde)

For instance, here's an experiment I made that tries to get an LLM to reveal a secret key: https://chainforge.ai/play/?f=28puvwc788bog

> **Note**
> To prevent abuse, you can only share up to 10 flows at a time, and each flow must be <5MB after compression.
> If you share more than 10 flows, the oldest link will break, so make sure to always Export important flows to `cforge` files,
> and use Share to only pass data ephemerally.

For finer details about the features of specific nodes, check out the [List of Nodes](https://chainforge.ai/docs/nodes/).

# Features

A key goal of ChainForge is facilitating **comparison** and **evaluation** of prompts and models. Basic features are:

- **Prompt permutations**: Set up a prompt template and feed it variations of input variables. ChainForge will prompt all selected LLMs with all possible permutations of the input prompt, so that you can get a better sense of prompt quality. You can also chain prompt templates at arbitrary depth (e.g., to compare templates).
- **Chat turns**: Go beyond one-off prompts and template follow-up chat messages, just as you template prompts. You can test how the wording of the user's query might change an LLM's output, or compare the quality of later responses across multiple chat models (or the same chat model with different settings!).
- **Model settings**: Change the settings of supported models, and compare across settings. For instance, you can measure the impact of a system message on ChatGPT by adding several ChatGPT models, changing individual settings, and nicknaming each one. ChainForge will send out queries to each version of the model.
- **Evaluation nodes**: Probe LLM responses in a chain and test them (classically) for some desired behavior. At a basic level, this is Python-script based (a minimal sketch follows this list). We plan to add preset evaluator nodes for common use cases in the near future (e.g., named-entity recognition). Note that you can also chain LLM responses into prompt templates to help evaluate outputs cheaply before more extensive evaluation methods.
- **Visualization nodes**: Visualize evaluation results on plots like grouped box-and-whisker (for numeric metrics) and histograms (for boolean metrics). Currently we only support numeric and boolean metrics. We aim to provide users more control and options for plotting in the future.
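
To make the evaluation-node idea concrete, here is a minimal sketch of the kind of Python scoring function such a node runs. It assumes the evaluator convention of defining an `evaluate(response)` function where `response.text` holds the LLM's reply; check the Evaluator node's built-in template for the exact response object.

```python
# Minimal sketch of a Python evaluator script (the `evaluate(response)`
# convention and the `response.text` property are assumptions; verify against
# the Evaluator node's default template).
def evaluate(response):
    """Boolean metric: did the reply include a Python function definition?"""
    text = response.text  # the LLM's reply (assumed property)
    return "def " in text
```

Attach an evaluator like this downstream of a Prompt node and the score is computed per response, ready to plot in a Visualization node.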

Taken together, these features let you easily:

- **Compare across prompts and prompt parameters**: Choose the best set of prompts that maximizes your eval target metrics (e.g., lowest code error rate). Or, see how changing parameters in a prompt template affects the quality of responses.
- **Compare across models**: Compare responses for every prompt across models and different model settings.
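
As a back-of-the-envelope illustration of how permutations and model comparisons multiply, the snippet below (plain Python, not ChainForge code; the model names are placeholder strings) shows how one template with two variables fans out into many queries:

```python
# Illustration only: how prompt permutations turn into LLM queries.
from itertools import product

template = "Explain the objective of {game} in at most {n} words."
games = ["chess", "go", "poker"]
word_limits = ["10", "50"]
models = ["model-a", "model-b", "model-c"]  # placeholder model names

prompts = [template.format(game=g, n=n) for g, n in product(games, word_limits)]
print(len(prompts))                # 3 games x 2 limits = 6 prompt permutations
print(len(prompts) * len(models))  # 6 prompts x 3 models = 18 queries sent
```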

We've also found that some users simply want to use ChainForge to make tons of parametrized queries to LLMs (e.g., chaining prompt templates into prompt templates), possibly score them, and then output the results to a spreadsheet (Excel `xlsx`). To do this, attach an Inspect node to the output of a Prompt node and click `Export Data`.

For more specific details, see our [documentation](https://chainforge.ai/docs/nodes/).

---

# Development

ChainForge was created by [Ian Arawjo](http://ianarawjo.com/index.html), a postdoctoral scholar in Harvard HCI's [Glassman Lab](http://glassmanlab.seas.harvard.edu/) with support from the Harvard HCI community. Collaborators include PhD students [Priyan Vaithilingam](https://priyan.info) and [Chelse Swoopes](https://seas.harvard.edu/person/chelse-swoopes), Harvard undergraduate [Sean Yang](https://shawsean.com), and faculty members [Elena Glassman](http://glassmanlab.seas.harvard.edu/glassman.html) and [Martin Wattenberg](https://www.bewitched.com/about.html).

This work was partially funded by NSF grants IIS-2107391, IIS-2040880, and IIS-1955699. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

We provide ongoing releases of this tool in the hopes that others find it useful for their projects.

## Inspiration and Links

ChainForge is meant to be general-purpose, and is not developed for a specific API or LLM back-end. Our ultimate goal is integration into other tools for the systematic evaluation and auditing of LLMs. We hope to help others who are developing prompt-analysis flows for LLMs, or otherwise auditing LLM outputs. This project was inspired by our own use case, but also shares some camaraderie with two related (closed-source) research projects, both led by [Sherry Wu](https://www.cs.cmu.edu/~sherryw/):

- "PromptChainer: Chaining Large Language Model Prompts through Visual Programming" (Wu et al., CHI ’22 LBW) [Video](https://www.youtube.com/watch?v=p6MA8q19uo0)
- "AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts" (Wu et al., CHI ’22)

Unlike these projects, we are focusing on supporting evaluation across prompts, prompt parameters, and models.

## How to collaborate?

We welcome open-source collaborators. If you want to report a bug or request a feature, open an [Issue](https://github.com/ianarawjo/ChainForge/issues). We also encourage users to implement the requested feature / bug fix and submit a Pull Request.

---

# Cite Us

If you use ChainForge for research purposes, or build upon the source code, we ask that you cite our [arXiv pre-print](https://arxiv.org/abs/2309.09128) in any related publications.
The BibTeX you can use is:

```bibtex
@misc{arawjo2023chainforge,
      title={ChainForge: A Visual Toolkit for Prompt Engineering and LLM Hypothesis Testing},
      author={Ian Arawjo and Chelse Swoopes and Priyan Vaithilingam and Martin Wattenberg and Elena Glassman},
      year={2023},
      eprint={2309.09128},
      archivePrefix={arXiv},
      primaryClass={cs.HC}
}
```

# License

ChainForge is released under the MIT License.

            
