| Field | Value |
| --- | --- |
| Name | applang |
| Version | 0.1.3 |
| Summary | A Prompt Programming Language |
| Homepage | https://github.com/appl-team/appl |
| Upload time | 2024-10-20 22:50:34 |
| Requires Python | >=3.9 |
| License | MIT |
# 🍎APPL: A Prompt Programming Language
[PyPI](https://pypi.python.org/pypi/applang)
[Python](https://www.python.org)
[pre-commit](https://pre-commit.com/)
[Ruff](https://github.com/astral-sh/ruff)
[Black](https://github.com/psf/black)
[mypy](http://mypy-lang.org/)
[MIT License](https://mit-license.org/)
[Discord](https://discord.gg/q3x4Qwgj29)
[arXiv](https://arxiv.org/abs/2406.13161)
**APPL** is A Prompt Programming Language that extends Python to provide a Natural, Intuitive, Convenient, and Efficient (NICE) way to utilize Large Language Models (LLMs) such as GPT in your program.
<video style="width: 100%" src="https://github.com/appl-team/appl/assets/12556773/5d75d3db-1b1c-48c9-97ec-e9d72a387e49" type="video/mp4" controls></video>
## Key Features
- **Readability and maintainability via seamless integration with Python.** APPL seamlessly embeds natural language prompts into Python programs, maintaining prompts' readability while inheriting modularity, reusability, dynamism and the ecosystem from the host programming language.
- **Flexible prompt engineering.** Besides supporting Python control flow and the modular decomposition of prompts, APPL offers prompt coding helpers to facilitate writing prompts in a modular and maintainable way.
- **Automatic parallelization via asynchronous computation.** APPL schedules LLM calls asynchronously, exploiting potential independence among them for efficient parallelization. This relieves users of the burden of managing synchronization manually, at almost no extra cost.
- **Smooth tool calling integration.** APPL provides intuitive ways to transform Python functions into tools that can be called by LLMs, making it easy for users to integrate existing Python libraries and functions with LLMs.
- **Tracing and Failure Recovery.** APPL traces the execution of LLM calls and supports recovery from failures, which is essential for debugging and error handling in the LLM programming paradigm.
- **More Features.** APPL also provides a unified interface for multiple LLM backends using [`litellm`](https://docs.litellm.ai/docs/), structured generations using [`instructor`](https://python.useinstructor.com/), and many other features.
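One way to picture the tool-calling feature is deriving a machine-readable description from an ordinary Python function's signature and docstring. The sketch below is a toy, stdlib-only illustration of that idea; `tool_schema` and `get_weather` are hypothetical names of our own, not APPL's actual API.

```python
import inspect


def tool_schema(fn):
    """Toy sketch (not APPL's API): build a tool description
    from a function's name, docstring, and type annotations."""
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {name: str(p.annotation) for name, p in sig.parameters.items()},
    }


def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"sunny in {city}"


schema = tool_schema(get_weather)
```

A schema like this is what an LLM backend would receive so the model can decide when to call the function and with which arguments.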
## News
* **[2024-07-12]**: We have improved our [tutorial](https://appl-team.github.io/appl/tutorials/). Please check it out for more detailed usage and examples.
<!-- and [cookbook](https://appl-team.github.io/appl/tutorials/) -->
## Quick Start
### Installation
You can simply install APPL from PyPI using pip:
```bash
pip install -U applang
```
More installation options can be found in the [installation guide](https://appl-team.github.io/appl/install).
### Setup
You need to set up API keys or your own LLM backends to interact with LLMs.
In this guide, we use the OpenAI API as the default backend.
You can set your OpenAI API key in the `.env` file in the root directory of your project:
```
OPENAI_API_KEY=<your openai api key>
```
or export it as an environment variable:
```bash
export OPENAI_API_KEY=<your openai api key>
```
For setting up other backends, enabling tracing and recovering from traces, please refer to the [setup guide](https://appl-team.github.io/appl/setup).
### Hello World
To begin, let's create a simple function that uses an LLM to respond to a greeting.
```python
import appl
from appl import gen, ppl

appl.init()  # initialize APPL

@ppl  # the @ppl decorator marks the function as an `APPL function`
def greeting(name: str):
    f"Hello World! My name is {name}."  # add text to the prompt
    return gen()  # call the default LLM with the current prompt

print(greeting("APPL"))  # call `greeting` as a normal Python function
```
The prompt for the generation is:
```
Hello World! My name is APPL.
```
The output will look like
```
Nice to meet you, APPL!
```
In this example, the `@ppl` decorator (`@` stands for `a` here) marks the `greeting` function as an *APPL function*. Within such a function, the standalone string `f"Hello World! My name is {name}."` is added to the prompt, and the `gen()` function calls the LLM to generate a response using the current prompt.
### Question Answering
Next, let's implement a question-answering system using APPL. In this example, the APPL program answers multiple questions about a quotation by first extracting the author's name (inspired by [this cookbook](https://cookbook.openai.com/articles/how_to_work_with_large_language_models)). [Here](https://colab.research.google.com/drive/1khZcleOrdLOWtUB4EMEQCjGA1vBaARI9) is a runnable Colab notebook of this example.
```python linenums="1" hl_lines="9 14 15 17"
import appl
from appl import AIRole, gen, ppl
from appl.const import NEWLINE

appl.init()

@ppl(ctx="copy")  # copy the context from the caller
def get_answer(question: str):
    question  # append to the prompt
    return gen()  # return as a future object

@ppl  # marks an APPL function
def answer_questions(quotation: str, questions: list[str]):
    "Extract the name of the author from the quotation below and answer questions."
    quotation  # append to the prompt
    with AIRole():  # assistant message
        f"The name of the author is {gen(stop=NEWLINE)}"  # specify the prefix
    return [get_answer(q) for q in questions]  # parallelize calls

quotation = '"Simplicity is the ultimate sophistication." -- Leonardo da Vinci'
questions = [
    "In what era did the author live?",
    # more questions can be added here
]
for ans in answer_questions(quotation, questions):
    print(ans)
```
The resulting conversation for the first question would look like (generated responses are in **bold**):
| Role | Message |
| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
| *User* | Extract the name of the author from the quotation below and answer questions.<br>"Simplicity is the ultimate sophistication." -- Leonardo da Vinci |
| *Assistant* | The name of the author is **Leonardo da Vinci.** |
| *User* | In what era did the author live? |
| *Assistant* | **Leonardo da Vinci lived during the Renaissance era.** |
In *APPL functions*, [expression statements](https://docs.python.org/3/reference/simple_stmts.html#expression-statements) are captured as prompts [based on the types of their values](https://appl-team.github.io/appl/tutorials/appendix/prompt_capture/). Notably, the f-string is processed part by part, so the `gen` function inside the f-string uses the contents preceding it. In this example, `The name of the author is ` serves as a prefix that guides the completion of the author's name.
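The capture semantics can be pictured as appending each captured value to a growing prompt. The toy model below is our own simplification for illustration, not APPL's internals:

```python
class PromptContext:
    """Toy model (our simplification, not APPL internals): each
    captured expression statement appends its value to the prompt."""

    def __init__(self) -> None:
        self.parts: list[str] = []

    def add(self, text: str) -> None:
        self.parts.append(text)

    def render(self) -> str:
        # Prompt parts are joined in the order they were captured.
        return "\n".join(self.parts)


ctx = PromptContext()
ctx.add("Extract the name of the author from the quotation below and answer questions.")
ctx.add('"Simplicity is the ultimate sophistication." -- Leonardo da Vinci')
prompt = ctx.render()
```

Under this picture, a `gen()` call simply sends `ctx.render()` as the prompt at the point where it appears.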
After the author's name is extracted, the `get_answer` function is called multiple times in parallel to answer the questions, with the context being copied (detailed in [context-management](#context-management)), demonstrating the automatic parallelization feature of APPL.
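The effect is analogous to the futures pattern in plain Python. The sketch below, with a stubbed `get_answer` in place of a real LLM call, shows the submit-then-collect pattern that APPL automates behind `gen()`:

```python
from concurrent.futures import ThreadPoolExecutor


def get_answer(question: str) -> str:
    # Stand-in for an LLM call; in APPL, gen() returns a future-like
    # object and this scheduling happens automatically.
    return f"answer to: {question}"


questions = ["In what era did the author live?", "Where was he born?"]
with ThreadPoolExecutor() as pool:
    # Submit all calls first so they run concurrently...
    futures = [pool.submit(get_answer, q) for q in questions]
    # ...then block only when each result is actually needed.
    answers = [f.result() for f in futures]
```

With APPL, users write the straightforward list comprehension shown in the example above and get this concurrency without managing an executor themselves.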
## Roadmap
- [x] Exclude `"""docstrings"""` from prompt formation by default.
- [ ] Use FastAPI to build a server for inspecting the traces.
- [ ] Add more ... (contributions are welcome!)
  - [ ] Examples and tutorials demonstrating usage
  - [ ] Test cases to increase coverage
## Tutorial and Cookbook
For a more comprehensive tutorial, please refer to the [tutorial](https://appl-team.github.io/appl/tutorials).
### Table of Contents
- [Introduction](https://appl-team.github.io/appl/tutorials/intro)
- [Getting Started](https://appl-team.github.io/appl/tutorials/1_get_started)
- [Example: QA with LMs](https://appl-team.github.io/appl/tutorials/2_qa_example)
- [APPL Function](https://appl-team.github.io/appl/tutorials/3_appl_function)
- [Concurrent LM Calls](https://appl-team.github.io/appl/tutorials/4_concurrent)
- [Tool Calls for LMs](https://appl-team.github.io/appl/tutorials/5_tool_calls)
- [Prompt Coding Helpers](https://appl-team.github.io/appl/tutorials/6_prompt_coding)
- [Using Tracing](https://appl-team.github.io/appl/tutorials/7_tracing)
### Cookbook
For more detailed usage and examples, please refer to the [cookbook](https://appl-team.github.io/appl/cookbook).
APPL can be used to reproduce some popular LM-based applications easily, such as:
* [Wordware's TwitterPersonality](https://twitter.wordware.ai/) [[APPL implementation](https://github.com/appl-team/TwitterPersonality)]: analyzes your tweets to determine your Twitter personality.
## Citation and Acknowledgment
If you find APPL helpful, please consider citing our paper:
```bibtex
@article{dong2024appl,
  title={APPL: A Prompt Programming Language for Harmonious Integration of Programs and Large Language Model Prompts},
  author={Dong, Honghua and Su, Qidong and Gao, Yubo and Li, Zhaoyu and Ruan, Yangjun and Pekhimenko, Gennady and Maddison, Chris J and Si, Xujie},
  journal={arXiv preprint arXiv:2406.13161},
  year={2024}
}
```
We would like to thank the open-source community for their contributions; we learned from or used the following libraries in this project:
[instructor](https://github.com/jxnl/instructor),
[LiteLLM](https://github.com/BerriAI/litellm),
[LMQL](https://github.com/eth-sri/lmql),
[Guidance](https://github.com/guidance-ai/guidance),
[SGLang](https://github.com/sgl-project/sglang) and
[autogen](https://github.com/microsoft/autogen).
## License
This project is licensed under the terms of the MIT License.