# 🐧 ペンペン (PenPen)
## Developer guide
### Install
To install PenPen from PyPI, use:
```
pip install PenPen-AI
```
To install PenPen from a local copy in editable mode (to make changes), use:
```
pip install -e {path_to_local_copy}
```
### Deploy
Steps to deploy to PyPI:
```
# install setuptools, wheel, and twine
pip install setuptools wheel twine
# package the project
python setup.py sdist bdist_wheel
# upload (make sure credentials are in ~/.pypirc or passed as arguments)
twine upload dist/*
```
## CLI User Guide
### Install
```
pip install PenPen-AI
```
### Run
```
# run a prompt
prompt-runner run -p {path_to_prompt} -t {path_to_task}
# print usage
prompt-runner --help
```
### Prompt folder structure and some details
PromptRunner is a CLI that allows you to run prompts for testing purposes.
It accepts the following arguments:
`-p --prompt`: the folder of the prompt to run
`-t --task`: the task to run for the given prompt; task-specific files must be in the prompt directory
`-o --output-dir`: the directory where the response is written; if omitted, the response is saved in the output folder of the executed task
#### Folder structure:
```
{prompt_folder}/
- openai.json # contains the openai client configuration
- persona.md # contains the persona prompt
- task_template.md # contains the task template prompt
- functions.py # (optional) contains the functions to be used for this prompt
- {task}/ # folder of a task
- facts.json (optional) # array of fact items, contains the facts specific to this task
- facts_filter.json (optional) # array of fact tag ids to be filtered
- task_parameter_1.md (optional) # contains the task parameters to be populated in the template
...
 - task_parameter_n.md (optional) # n-th task parameter
```
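The layout above can be sketched in code. The snippet below builds a hypothetical minimal prompt folder named `summarizer` with a single task `demo`; the folder and file names are illustrative, only the structure follows the description above.

```python
from pathlib import Path

# Hypothetical example: a minimal prompt folder with one task.
root = Path("summarizer")
(root / "demo").mkdir(parents=True, exist_ok=True)

# Required prompt-level files.
(root / "openai.json").write_text('{"model": "gpt-4-0613"}')
(root / "persona.md").write_text("You are a concise summarizer.")
(root / "task_template.md").write_text("Summarize the text below.\n\n{text}")

# Task-level file providing the {text} parameter for the "demo" task.
(root / "demo" / "text.md").write_text("PenPen standardizes prompts for LLMs.")
```

With this layout in place, the prompt would be run with `prompt-runner run -p summarizer -t demo`.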
#### Info on the files
##### openai.json
Contains the OpenAI arguments used for this specific call; all fields are optional except `model`:
```
{
"model": "gpt-3.5-turbo-0613" // one of "gpt-3.5-turbo-0613", "gpt-3.5-turbo-0613", "gpt-4-0613"
"max_tokens": 1000 // max tokens for the response
"stream": true // stream the response or not
"temperature": 0.3 // temperature to be used
"top_p": null // omit it when using temperature
"n": 1 // right now only 1 is supported, so can be omitted
"max_retries_after_openai_error": 5 // how many times to retry after an openai error before failing
"retry_delay_seconds": 15 // how many seconds to wait before retrying
}
```
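The rule above (only `model` is required, everything else is optional) can be sketched as a small loader. This is illustrative only, not PenPen's actual code; the fallback values are taken from the example defaults shown above.

```python
import json

# A hypothetical openai.json that supplies only some fields.
raw = '{"model": "gpt-4-0613", "temperature": 0.3}'
config = json.loads(raw)

# "model" is the only required field.
assert "model" in config, "openai.json must define a model"

# Fill omitted optional fields with illustrative defaults.
config.setdefault("max_tokens", 1000)
config.setdefault("max_retries_after_openai_error", 5)
config.setdefault("retry_delay_seconds", 15)

print(config["model"])  
```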
##### persona.md
A markdown template that holds the persona prompt.
##### task_template.md
A markdown template that holds the task prompt. To add `task_parameters`, use the template syntax `{task_parameter_name}` (check the examples for more details).
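The substitution can be illustrated with Python's built-in string formatting; this mimics the described behaviour, not necessarily PenPen's exact implementation, and the parameter names are hypothetical.

```python
# Contents of a hypothetical task_template.md with two task parameters.
template = "Answer the question using the context.\n\nQuestion: {question}\n\nContext: {context}"

# In a real prompt folder these values would come from
# {task}/question.md and {task}/context.md.
task_parameters = {
    "question": "What does PenPen do?",
    "context": "PenPen standardizes prompts for LLMs.",
}

# Each {task_parameter_name} placeholder is replaced by its file's content.
prompt = template.format(**task_parameters)
print(prompt)
```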
##### functions.py
A Python file containing the functions used to parse the prompt. Check the examples for more details, but note that the variables `function_wrappers`, `function_call`, and `max_consecutive_function_calls` must be defined.
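A sketch of what such a file might contain is shown below. Only the three variable names are taken from the requirement above; the value shapes (a list of callables, an `"auto"` mode string, an integer cap) are guesses for illustration, so check the package examples for the real format.

```python
# Hypothetical functions.py sketch; value shapes are assumptions.

def get_weather(city: str) -> str:
    """Stand-in for a function the model could call."""
    return f"Sunny in {city}"

function_wrappers = [get_weather]   # functions exposed to the prompt (assumed shape)
function_call = "auto"              # let the model decide when to call (assumed value)
max_consecutive_function_calls = 3  # cap on back-to-back function calls (assumed type)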
##### {task}/facts.json and {task}/facts_filter.json
`{task}/facts.json` is a JSON file containing an array of facts to be used for this task. Each fact is an object with the following fields:
```
{
"tag": "fact_id", // a unique id for the fact
"content": "fact content", // the fact content
}
```
`{task}/facts_filter.json` is a JSON file containing an array of fact tag ids. If this file is not present, all facts are used; otherwise only the facts whose tags appear in the filter are used.
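The filtering rule can be sketched as follows; the fact contents are invented for illustration, and only the file formats follow the description above.

```python
import json

# Hypothetical facts.json and facts_filter.json contents.
facts = json.loads("""[
    {"tag": "opening_hours", "content": "Open 9-17 on weekdays."},
    {"tag": "location", "content": "Via Roma 1, Milan."}
]""")
facts_filter = json.loads('["location"]')

# When a filter file is present, keep only facts whose tag is listed;
# with no filter, all facts would be used.
selected = [fact for fact in facts if fact["tag"] in facts_filter]
print(selected)
```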
##### {task}/{task_parameter_name}.md
If the task template contains task parameters, there must be a markdown file for each one: the file name must match the task parameter name, and the file's content is the value substituted into the template.
### Chain
The chain command chains multiple prompt runs: the output of the n-th prompt run is appended to the task_template of the (n+1)-th prompt run.
```
prompt-runner chain -p {prompt_path1},{task_name1} {prompt_path2},{task_name2} ... {prompt_pathn},{task_n} -o {output_path}
```
The output of each chain execution is stored in a `chain_{timestamp}` folder in the working directory, unless an output directory is specified with the `-o` argument.
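The append-to-next-template behaviour can be sketched as a loop; `run_prompt` below is a stand-in for a real model call, not PenPen's actual code.

```python
# Sketch of chaining: each run's output is appended to the next template.

def run_prompt(task_template: str) -> str:
    # Stand-in for executing a prompt run against a model.
    return f"output for: {task_template!r}"

templates = ["First step.", "Second step."]
carried = ""
for template in templates:
    # Append the previous run's output to the current task template.
    combined = template + ("\n\n" + carried if carried else "")
    carried = run_prompt(combined)

print(carried)
```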