LLMPrompts

Name: LLMPrompts
Version: 0.1.3
Home page: https://github.com/antononcube/Python-packages/tree/main/LLMPrompts
Summary: Facilitating the creation, storage, retrieval, and curation of LLM prompts.
Author: Anton Antonov
Requires Python: >=3.7
Keywords: prompt, prompts, large language model, large language models, llm
Upload time: 2023-10-06 01:45:19
            # LLMPrompts Python package

## In brief

This Python package provides data and functions for facilitating the creation, storage, retrieval, and curation of 
[Large Language Models (LLM) prompts](https://en.wikipedia.org/wiki/Prompt_engineering).

*(Here is a [link to the corresponding notebook](https://github.com/antononcube/Python-packages/blob/main/LLMPrompts/docs/LLM-prompts-usage.ipynb).)*

--------

## Installation

### Install from GitHub

```shell
pip install -e git+https://github.com/antononcube/Python-packages.git#egg=LLMPrompts-antononcube\&subdirectory=LLMPrompts
```

### Install from PyPI

```shell
pip install LLMPrompts
```

------

## Basic usage examples


### Prompt data retrieval

Here the prompt and LLM-function packages "LLMPrompts", [AAp1], and "LLMFunctionObjects", [AAp2], are loaded:


```python
from LLMPrompts import *
from LLMFunctionObjects import *
```

Here is prompt data retrieval using a regex:


```python
llm_prompt_data(r'^N.*e$', fields="Description")
```




    {'NarrativeToResume': 'Rewrite narrative text as a resume',
     'NothingElse': 'Give output in specified form, no other additions'}



Retrieve a prompt with a specified name and related data fields:


```python
llm_prompt_data("Yoda", fields=['Description', "PromptText"])
```




    {'Yoda': ['Respond as Yoda, you will',
      'You are Yoda. \nRespond to ALL inputs in the voice of Yoda from Star Wars. \nBe sure to ALWAYS use his distinctive style and syntax. Vary sentence length.']}



Here is the number of all prompt names: 


```python
len(llm_prompt_data())
```




    154



Here is a data frame with all prompt names and descriptions:


```python
import pandas

dfPrompts = pandas.DataFrame([dict(zip(["Name", "Description"], x)) for x in llm_prompt_data(fields=["Name", "Description"]).values()])
dfPrompts
```




|     | Name                    | Description                                       |
|----:|:------------------------|:--------------------------------------------------|
|   0 | 19thCenturyBritishNovel | You know that AI could as soon forget you as m... |
|   1 | AbstractConvert         | Convert text into an abstract                     |
|   2 | ActiveVoiceRephrase     | Rephrase text from passive into active voice      |
|   3 | AlternativeHistorian    | Explore alternate versions of history             |
|   4 | AnimalSpeak             | The language of beasts, sort of                   |
| ... | ...                     | ...                                               |
| 149 | FriendlySnowman         | Chat with a snowman                               |
| 150 | HugoAwardWinner         | Write a science fiction novel about climate ch... |
| 151 | ShortLineIt             | Format text to have shorter lines                 |
| 152 | Unhedged                | Rewrite a sentence to be more assertive           |
| 153 | WordGuesser             | Play a word game with AI                          |

154 rows × 2 columns
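
Such a data frame makes it convenient to search the prompt collection. For example, here is a standard pandas filter that selects the prompts whose descriptions mention "text":

```python
# Case-insensitive search over the descriptions of the data frame built above:
dfPrompts[dfPrompts["Description"].str.contains("text", case=False)]
```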



### Code generating function

Here is the creation of an LLM function with a code-writing prompt that takes the target language as an argument: 

```python
fcw = llm_function(llm_prompt("CodeWriterX")("Python"), e='ChatGPT')
fcw.prompt
```

```
'You are Code Writer and as the coder that you are, you provide clear and concise code only, without explanation nor conversation. \nYour job is to output code with no accompanying text.\nDo not explain any code unless asked. Do not provide summaries unless asked.\nYou are the best Python programmer in the world but do not converse.\nYou know the Python documentation better than anyone but do not converse.\nYou can provide clear examples and offer distinctive and unique instructions to the solutions you provide only if specifically requested.\nOnly code in Python unless told otherwise.\nUnless they ask, you will only give code.'
```

Here is a code generation request with that function:


```python
print(fcw("Random walk simulation."))
```

```python
import random

def random_walk(n):
    x, y = 0, 0
    for _ in range(n):
        dx, dy = random.choice([(0,1), (0,-1), (1,0), (-1,0)])
        x += dx
        y += dy
    return (x, y)
```
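
Assuming the generated code above has been executed, it can be tried directly:

```python
# Run a 100-step random walk and print the final (x, y) position:
print(random_walk(100))
```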

### Fixing function

Using the function prompt "FTFY" over a misspelled word:


```python
llm_prompt("FTFY")("invokation")
```

```
'Find and correct grammar and spelling mistakes in the following text.\nResponse with the corrected text and nothing else.\nProvide no context for the corrections, only correct the text.\ninvokation'
```

Here is the corresponding LLM function:


```python
fFTFY = llm_function(llm_prompt("FTFY"))
fFTFY("wher was we?")
```

```
'\n\nWhere were we?'
```

Here is a modifier prompt with two arguments:

```python
llm_prompt("ShortLineIt")("MAX_CHARS", "TEXT")
```

```
'Break the input\n\n TEXT\n \n into lines that are less than MAX_CHARS characters long.\n Do not otherwise modify the input. Do not add other text.'
```

Here is the corresponding LLM function:

```python
fb = llm_function(llm_prompt("ShortLineIt")("70"))
```

Here is a longish text:


```python
text = 'A random walk simulation is a type of simulation that models the behavior of a random walk. A random walk is a mathematical process in which a set of steps is taken in a random order. The steps can be in any direction, and the order of the steps is determined by a random number generator. The random walk simulation is used to model a variety of real-world phenomena, such as the movement of particles in a gas or the behavior of stock prices. The random walk simulation is also used to study the behavior of complex systems, such as the spread of disease or the behavior of traffic on a highway.'
```

Here is "ShortLineIt" applied to the text above:


```python
print(fb(text))
```

    
    
    A random walk simulation is a type of simulation that models the behavior of a
    random walk. A random walk is a mathematical process in which a set of steps is
    taken in a random order. The steps can be in any direction, and the order of the
    steps is determined by a random number generator. The random walk simulation is
    used to model a variety of real-world phenomena, such as the movement of
    particles in a gas or the behavior of stock prices. The random walk simulation
    is also used to study the behavior of complex systems, such as the spread of
    disease or the behavior of traffic on a highway.
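
One can check the effect by measuring the longest line of the result (assuming `fb` and `text` from above):

```python
# Maximum line length of the shortened text; expected to be below the 70-character limit:
shortened = fb(text)
print(max(len(line) for line in shortened.splitlines()))
```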


### Chat object creation with a prompt

Here a chat object is created with a persona prompt:


```python
chatObj = llm_chat(llm_prompt("MadHatter"))
```

Send a message:


```python
chatObj.eval("Who are you?")
```

```
'Ah, my dear curious soul, I am the Mad Hatter, the one and only! A whimsical creature, forever lost in the realm of absurdity and tea time. I am here to entertain and perplex, to dance with words and sprinkle madness in the air. So, tell me, my friend, what brings you to my peculiar tea party today?'
```

Send another message:


```python
chatObj.eval("I want oolong tea. And a chocolate.")
```

```
'Ah, oolong tea, a splendid choice indeed! The leaves unfurl, dancing in the hot water, releasing their delicate flavors into the air. And a chocolate, you say? How delightful! A sweet morsel to accompany the swirling warmth of the tea. But, my dear friend, in this topsy-turvy world of mine, I must ask: do you prefer your chocolate to be dark as the night or as milky as a moonbeam?'
```

-----

## Prompt spec DSL

More formally, the Domain Specific Language (DSL) for specifying prompts
has the following elements: 

- Prompt personas can be "addressed" with "@". For example:

```
@Yoda Life can be easy, but some people insist for it to be difficult.
```

- One or several modifier prompts can be specified at the end of the prompt spec. For example:

```
Summer is over, school is coming soon. #HaikuStyled
```

```
Summer is over, school is coming soon. #HaikuStyled #Translated|Russian
```

- Function prompts can be applied "cell-wide" by prefixing them with "!" and placing the spec at
  the start of the text to be expanded. For example:

```
!Translated|Portuguese Summer is over, school is coming soon
```

- Function prompts can be applied to "previous" messages by prefixing them with "!" and
  appending one of the pointers "^" or "^^".
  The former means "the last message"; the latter means "all messages."
    - The messages can be provided with the optional argument `messages` of `llm_prompt_expand`.
- For example:

```
!ShortLineIt^
```

- Here is a table of prompt expansion specs (more or less the same as the one in [SW1]):

| Spec               | Interpretation                                      |
|:-------------------|:----------------------------------------------------|
| @*name*            | Direct chat to a persona                            |
| #*name*            | Use modifier prompts                                |
| !*name*            | Use function prompt with the input of current cell  |
| !*name*>           | *«same as above»*                                   |
| &*name*>           | *«same as above»*                                   |
| !*name*^           | Use function prompt with previous chat message      |
| !*name*^^          | Use function prompt with all previous chat messages |
| !*name*\|*param*... | Include parameters for prompts                      |

**Remark:** Function prompts can be used with either of the sigils "!" and "&".

**Remark:** Prompt expansion makes the usage of LLM chatbooks much easier.
See ["JupyterChatbook"](https://pypi.org/project/JupyterChatbook/), [AAp3].


-----

## Implementation notes

### Following Raku implementations

This Python package reuses designs, implementation structures, and prompt data from the Raku package
["LLM::Prompts"](https://raku.land/zef:antononcube/LLM::Prompts), [AAp4].

### Prompt collection

The original (for this package) collection of prompts was a (not small) sample of the prompt texts
hosted at [Wolfram Prompt Repository](https://resources.wolframcloud.com/PromptRepository/) (WPR), [SW2].
All prompts taken from WPR have their contributors and the URLs of the corresponding WPR pages recorded in the package.

Example prompts from Google/Bard/PaLM and ~~OpenAI/ChatGPT~~ are added using the format of WPR. 

### Extending the prompt collection

It is essential to have the ability to programmatically add new prompts.
(Not implemented yet -- see the TODO section below.)

### Prompt expansion

Initially, a prompt DSL grammar and corresponding expansion actions were implemented.
Having a grammar is most likely not needed, though; it is better to use "prompt expansion" via regex-based substitutions.

Prompts can be "just expanded" using the function `llm_prompt_expand`. 
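
To make the substitution idea concrete, here is a toy sketch (not the package's actual implementation) of regex-based expansion for the "#" modifier sigil, using `llm_prompt_data` from above:

```python
import re
from LLMPrompts import llm_prompt_data

def expand_modifiers(spec: str) -> str:
    """Toy regex-based expansion of '#Name' modifier specs.
    Illustrative only -- the package's expansion handles all sigils and parameters."""
    def repl(m):
        name = m.group(1)
        # llm_prompt_data(name, fields="PromptText") gives {name: prompt_text} for known names.
        found = llm_prompt_data(name, fields="PromptText")
        return found.get(name, m.group(0))
    return re.sub(r"#(\w+)", repl, spec)

print(expand_modifiers("Summer is over, school is coming soon. #HaikuStyled"))
```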

### Usage in chatbooks

Here is a flowchart that summarizes prompt parsing and expansion in chat cells of Jupyter chatbooks, [AAp3]:

```mermaid
flowchart LR
    OpenAI{{OpenAI}}
    PaLM{{PaLM}}
    LLMFunc[[LLMFunctionObjects]]
    LLMProm[[LLMPrompts]]
    CODB[(Chat objects)]
    PDB[(Prompts)]
    CCell[/Chat cell/]
    CRCell[/Chat result cell/]
    CIDQ{Chat ID<br>specified?}
    CIDEQ{Chat ID<br>exists in DB?}
    RECO[Retrieve existing<br>chat object]
    COEval[Message<br>evaluation]
    PromParse[Prompt<br>DSL spec parsing]
    KPFQ{Known<br>prompts<br>found?}
    PromExp[Prompt<br>expansion]
    CNCO[Create new<br>chat object]
    CIDNone["Assume chat ID<br>is 'NONE'"] 
    subgraph Chatbook frontend    
        CCell
        CRCell
    end
    subgraph Chatbook backend
        CIDQ
        CIDEQ
        CIDNone
        RECO
        CNCO
        CODB
    end
    subgraph Prompt processing
        PDB
        LLMProm
        PromParse
        KPFQ
        PromExp 
    end
    subgraph LLM interaction
      COEval
      LLMFunc
      PaLM
      OpenAI
    end
    CCell --> CIDQ
    CIDQ --> |yes| CIDEQ
    CIDEQ --> |yes| RECO
    RECO --> PromParse
    COEval --> CRCell
    CIDEQ -.- CODB
    CIDEQ --> |no| CNCO
    LLMFunc -.- CNCO -.- CODB
    CNCO --> PromParse --> KPFQ
    KPFQ --> |yes| PromExp
    KPFQ --> |no| COEval
    PromParse -.- LLMProm 
    PromExp -.- LLMProm
    PromExp --> COEval 
    LLMProm -.- PDB
    CIDQ --> |no| CIDNone
    CIDNone --> CIDEQ
    COEval -.- LLMFunc
    LLMFunc <-.-> OpenAI
    LLMFunc <-.-> PaLM
```

Here is an example of prompt expansion in a generic LLM chat cell and a chat meta cell
showing the content of the corresponding chat object.


-----

## TODO

- [ ] TODO Implementation
  - [X] DONE Prompt retrieval adverbs
  - [X] DONE Prompt spec expansion
  - [ ] TODO Addition of user/local prompts 
    - [ ] TODO Using XDG data directory.
    - [ ] TODO By modifying existing prompts.
    - [ ] TODO Automatic prompt template fill-in.
    - [ ] TODO Guided template fill-in.
      - [ ] TODO DSL based
      - [ ] TODO LLM based
- [ ] TODO Documentation
  - [X] DONE Querying (ingested) prompts
  - [X] DONE Prompt DSL
  - [ ] TODO Prompt format
  - [ ] TODO On hijacking prompts
  - [ ] TODO Diagrams
    - [X] DONE Chatbook usage 
    - [ ] TODO Typical usage


------

## References

### Articles

[AA1] Anton Antonov,
["Workflows with LLM functions"](https://rakuforprediction.wordpress.com/2023/08/01/workflows-with-llm-functions/),
(2023),
[RakuForPrediction at WordPress](https://rakuforprediction.wordpress.com).

[SW1] Stephen Wolfram,
["The New World of LLM Functions: Integrating LLM Technology into the Wolfram Language"](https://writings.stephenwolfram.com/2023/05/the-new-world-of-llm-functions-integrating-llm-technology-into-the-wolfram-language/),
(2023),
[Stephen Wolfram Writings](https://writings.stephenwolfram.com).

[SW2] Stephen Wolfram,
["Prompts for Work & Play: Launching the Wolfram Prompt Repository"](https://writings.stephenwolfram.com/2023/06/prompts-for-work-play-launching-the-wolfram-prompt-repository/),
(2023),
[Stephen Wolfram Writings](https://writings.stephenwolfram.com).

### Packages, paclets, repositories

[AAp1] Anton Antonov,
[LLMPrompts Python package](https://github.com/antononcube/Python-packages/tree/main/LLMPrompts),
(2023),
[Python-packages at GitHub/antononcube](https://github.com/antononcube/Python-packages).

[AAp2] Anton Antonov,
[LLMFunctionObjects Python package](https://github.com/antononcube/Python-packages/tree/main/LLMFunctionObjects),
(2023),
[Python-packages at GitHub/antononcube](https://github.com/antononcube/Python-packages).

[AAp3] Anton Antonov,
[JupyterChatbook Python package](https://github.com/antononcube/Python-JupyterChatbook),
(2023),
[GitHub/antononcube](https://github.com/antononcube).

[AAp4] Anton Antonov,
[LLM::Prompts Raku package](https://github.com/antononcube/Raku-LLM-Prompts),
(2023),
[GitHub/antononcube](https://github.com/antononcube).

[AAp5] Anton Antonov,
[LLM::Functions Raku package](https://github.com/antononcube/Raku-LLM-Functions),
(2023),
[GitHub/antononcube](https://github.com/antononcube).

[AAp6] Anton Antonov,
[Jupyter::Chatbook Raku package](https://github.com/antononcube/Raku-Jupyter-Chatbook),
(2023),
[GitHub/antononcube](https://github.com/antononcube).


[WRIr1] Wolfram Research, Inc.,
[Wolfram Prompt Repository](https://resources.wolframcloud.com/PromptRepository).

            
