proompter

Name: proompter
Version: 0.0.2
Summary: Simple wrapper around some LLM handlers.
Author: Kyrylo Mordan
Upload time: 2024-08-05 05:10:30
# Proompter

A wrapper for LLM calls, meant for experimentation with different prompt and history
handling strategies.

```python
import os
import sys
from dotenv import load_dotenv
load_dotenv("../../.local.env")
sys.path.append('../')
from proompter import Proompter
```

    /home/kyriosskia/miniconda3/envs/testenv/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
      from .autonotebook import tqdm as notebook_tqdm


### 1. Initializing instance

Proompter is composed of multiple dependencies, which can either be initialized externally and passed to the class, or initialized by the class itself from the provided parameters.

These include:

- LLM handler: makes calls to the LLM
- Prompt handler: prepares inputs based on templates
- Prompt strategy handler: contains ways to call the LLM handler with a selected strategy
- Tokenizer handler: tokenizes text


```python
llm_handler = Proompter(
  # parameters to be passed to provided llm handler
  llm_h_params = {
    'model_name' : 'llama3',
    'connection_string' : 'http://localhost:11434',
    'kwargs' : {}
  },
  # parameters to be passed to provided prompt handler
  prompt_h_params = {
    'template' : {
        "system" : "{content}",
        "assistant" : "{content}",
        "user" : "{content}"
    }
  },
  # parameters to be passed to provided call strategy handler
  call_strategy_h_params = {
    'strategy_name' : None,
    'strategy_params' : {}
  },
  # parameters to be passed to tokenizer handler
  tokenizer_h_params = {
    'access_token' : os.getenv("HF_ACCESS_TOKEN"),
    'tokenizer_name' : "meta-llama/Meta-Llama-3-8B"
  }
)
```

    The token has not been saved to the git credentials helper. Pass `add_to_git_credential=True` in this function directly or `--add-to-git-credential` if using via `huggingface-cli` if you want to set the git credential as well.
    Token is valid (permission: read).
    Your token has been saved to /home/kyriosskia/.cache/huggingface/token
    Login successful


### 2. Chat methods

Methods for working with chat variants of models.

#### 2.1 Essential chat method

Calls the LLM handler with the provided messages, prepared according to the provided template, using the selected prompt strategy.


```python
messages = [{'role': 'user', 'content': 'Why is the sky blue?'}]

response = await llm_handler.prompt_chat(
  # required
  messages = messages,
  # optional, overrides parameters passed to handlers
  model_name = "llama3",
  call_strategy_name = "last_call",
  call_strategy_params = { 'n_calls' : 1},
  prompt_templates = {
        "system" : "{content}",
        "assistant" : "{content}",
        "user" : "{content}"
    }
)
response
```

    HTTP Request: POST http://localhost:11434/api/chat "HTTP/1.1 200 OK"
    /home/kyriosskia/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:785: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
      warnings.warn(





    {'model': 'llama3',
     'created_at': '2024-08-05T04:57:00.847696577Z',
     'message': {'role': 'assistant',
      'content': "The sky appears blue because of a phenomenon called scattering, which involves the interaction between light, tiny molecules in the atmosphere, and our eyes. Here's a simplified explanation:\n\n1. **Sunlight**: When sunlight enters Earth's atmosphere, it consists of all the colors of the visible spectrum (red, orange, yellow, green, blue, indigo, and violet).\n2. **Molecules**: The atmosphere is made up of tiny molecules like nitrogen (N2) and oxygen (O2). These molecules are much smaller than the wavelength of light.\n3. **Scattering**: When sunlight hits these molecules, it scatters in all directions. This scattering effect is more pronounced for shorter wavelengths of light, such as blue and violet.\n4. **Blue dominance**: As a result of this scattering, the shorter wavelengths of light (blue and violet) are dispersed throughout the atmosphere, reaching our eyes from multiple angles. This is why the sky appears blue during the daytime, especially when the sun is overhead.\n5. **Other factors**: The color of the sky can be influenced by other atmospheric conditions, such as:\n\t* Dust and pollution particles: These can scatter light in various ways, changing the apparent color of the sky.\n\t* Water vapor: High humidity can cause the sky to appear more hazy or gray.\n\t* Clouds: Clouds can reflect and absorb light, affecting the overall color of the sky.\n\nSo, to summarize, the blue color we see in the sky is primarily due to the scattering of sunlight by tiny molecules in the atmosphere. The shorter wavelengths of light (blue and violet) are preferentially scattered, giving our skies their familiar blue hue."},
     'done_reason': 'stop',
     'done': True,
     'total_duration': 2893315343,
     'load_duration': 763924,
     'prompt_eval_count': 11,
     'prompt_eval_duration': 22698000,
     'eval_count': 339,
     'eval_duration': 2745388000,
     'response_time': 2.8988137245178223,
     'messages': [{'role': 'user', 'content': 'Why is the sky blue?'},
      {'role': 'assistant',
       'content': "The sky appears blue because of a phenomenon called scattering, which involves the interaction between light, tiny molecules in the atmosphere, and our eyes. Here's a simplified explanation:\n\n1. **Sunlight**: When sunlight enters Earth's atmosphere, it consists of all the colors of the visible spectrum (red, orange, yellow, green, blue, indigo, and violet).\n2. **Molecules**: The atmosphere is made up of tiny molecules like nitrogen (N2) and oxygen (O2). These molecules are much smaller than the wavelength of light.\n3. **Scattering**: When sunlight hits these molecules, it scatters in all directions. This scattering effect is more pronounced for shorter wavelengths of light, such as blue and violet.\n4. **Blue dominance**: As a result of this scattering, the shorter wavelengths of light (blue and violet) are dispersed throughout the atmosphere, reaching our eyes from multiple angles. This is why the sky appears blue during the daytime, especially when the sun is overhead.\n5. **Other factors**: The color of the sky can be influenced by other atmospheric conditions, such as:\n\t* Dust and pollution particles: These can scatter light in various ways, changing the apparent color of the sky.\n\t* Water vapor: High humidity can cause the sky to appear more hazy or gray.\n\t* Clouds: Clouds can reflect and absorb light, affecting the overall color of the sky.\n\nSo, to summarize, the blue color we see in the sky is primarily due to the scattering of sunlight by tiny molecules in the atmosphere. The shorter wavelengths of light (blue and violet) are preferentially scattered, giving our skies their familiar blue hue."}],
     'input_tokens': 378,
     'output_tokens': 338,
     'total_tokens': 716}
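
The generated reply itself is nested inside the returned dictionary; to print just the text (as later examples do):


```python
print(response['message']['content'])
```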



#### 2.2 Calling chat method in parallel

Same as `prompt_chat`, but the message sets are processed in parallel, and multiple responses are returned instead of one.


```python
messages = [
   [{'role': 'system', 'content': 'You are answering all requests with "HODOR"'}, 
   {'role': 'user', 'content': 'Why is the sky blue?'}],
   [{'role': 'user', 'content': 'Compose a small poem about blue skies.'}]
]

responses = await llm_handler.prompt_chat_parallel(
  # required
  messages = messages
  # optional, overrides parameters passed to handlers
  # same as prompt_chat
)

for response in responses:
  print("\n ### \n")
  print(response['message']['content'])

```

    HTTP Request: POST http://localhost:11434/api/chat "HTTP/1.1 200 OK"
    HTTP Request: POST http://localhost:11434/api/chat "HTTP/1.1 200 OK"


    
     ### 
    
    HODOR
    
     ### 
    
    Blue skies, so calm and bright
    A canvas of serenity in sight
    No clouds to mar the view
    Just endless blue, pure and true
    
    The sun shines down with gentle might
    Warming the earth, banishing night
    A blue expanse that's free from fear
    Invigorating all who draw near


#### 2.3 Chatting with LLM handler

Calls `prompt_chat` with recorded history, so that each time the chat method is called, previous messages do not need to be provided. (A history handler will be added later.)


```python
answer = await llm_handler.chat(
    prompt = "Hi, my name is Kyrios, what is yours?",
    # optional to reset history
    new_dialog = True
)

print(answer)
```

    HTTP Request: POST http://localhost:11434/api/chat "HTTP/1.1 200 OK"


    Nice to meet you, Kyrios! I'm LLaMA, an AI assistant developed by Meta AI that can understand and respond to human input in a conversational manner. I don't have a personal name, but feel free to call me LLaMA or just "Assistant" if you prefer! What brings you here today?



```python
answer = await llm_handler.chat(
    prompt = "Could you pls remind me my name?"
)

print(answer)
```

    HTTP Request: POST http://localhost:11434/api/chat "HTTP/1.1 200 OK"


    Your name is Kyrios, isn't it?


Streaming variant is also available.


```python
generator = llm_handler.chat_stream(
    prompt = "Could you pls remind me my name?"
)
async for message in generator:
    print(message)

```

    HTTP Request: POST http://localhost:11434/api/chat "HTTP/1.1 200 OK"


    I
     already
     did
    !
     Your
     name
     is
     Ky
    rios
    .
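
Since the generator yields the reply in chunks (plain strings, judging by the printed output above), the full message can be reassembled by concatenation:


```python
# Collect the streamed chunks into a single string.
chunks = []
async for chunk in llm_handler.chat_stream(prompt="Could you pls remind me my name?"):
    chunks.append(chunk)
print("".join(chunks))
```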
    


### 3. Instruct methods

Methods for working with instruct variants of models.

#### 3.1 Essential instruct method


```python
prompt = '2+2='

response = await llm_handler.prompt_instruct(
  # required
  prompt = prompt,
  # optional, overrides parameters passed to handlers
  model_name = "llama3",
  call_strategy_name = "last_call",
  call_strategy_params = { 'n_calls' : 1}
)
response
```

    HTTP Request: POST http://localhost:11434/api/generate "HTTP/1.1 200 OK"





    {'model': 'llama3',
     'created_at': '2024-08-05T04:57:53.582111792Z',
     'response': '4',
     'done': True,
     'done_reason': 'stop',
     'context': [128006,
      882,
      128007,
      271,
      17,
      10,
      17,
      28,
      128009,
      128006,
      78191,
      128007,
      271,
      19,
      128009],
     'total_duration': 146273731,
     'load_duration': 1145419,
     'prompt_eval_count': 9,
     'prompt_eval_duration': 12819000,
     'eval_count': 2,
     'eval_duration': 8360000,
     'response_time': 0.19240951538085938,
     'input_tokens': 4,
     'output_tokens': 1,
     'total_tokens': 5}
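
For instruct calls, the generated text sits directly under the `response` key:


```python
print(response['response'])
```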



#### 3.2 Calling instruct in parallel


```python
prompts = ["2+2=",
           "Define color in one sentence."]

responses = await llm_handler.prompt_instruct_parallel(
    prompts = prompts
    # optional, overrides parameters passed to handlers
    # same as prompt_instruct
    )

for response in responses:
  print("\n ### \n")
  print(response['response'])
```

    HTTP Request: POST http://localhost:11434/api/generate "HTTP/1.1 200 OK"
    HTTP Request: POST http://localhost:11434/api/generate "HTTP/1.1 200 OK"


    
     ### 
    
    4
    
     ### 
    
    Color is a form of electromagnetic radiation, perceived by the human eye and brain as a quality that can be perceived as hue, saturation, and brightness, which allows us to distinguish between different wavelengths or frequencies of light.


### 4. Prompt templates

Sometimes it can be useful to process inputs and outputs according to a certain template, for example adding some kind of header to every user prompt, or producing better-structured output for history. Separating templates like this from the inputs can also be more convenient.


```python
default_prompt_template = {
        "system" : "{content}",
        "assistant" : "{content}",
        "user" : "{content}"
    }

messages = [
    {'role': 'system', 
     'content': """You are helpful assistant that answers to everything bliefly with one sentence. 
     All of you responses are only in latin."""},
    {'role': 'user', 
     'content': 'Why is the sky blue?'}]

response = await llm_handler.prompt_chat(
  # required
  messages = messages,
  # optional, overrides parameters passed to handlers
  prompt_templates = default_prompt_template
)

print(response['message']['content'])
```

    HTTP Request: POST http://localhost:11434/api/chat "HTTP/1.1 200 OK"


    "Caelum caeruleum est, quia solis radii cum aquis et aeribus permixti lucem refrenant."



```python
alt_prompt_template = {
        "system" : """All of your answers if not in english, must contain tranlations.
        {content}""",
        "assistant" : "My answer: {content}",
        "user" : "{content}"
    }

messages = [
    {'role': 'system', 
     'content': """You are helpful assistant that answers to everything bliefly with one sentence. 
     All of you responses are in latin."""},
    {'role': 'user', 
     'content': 'Why is the sky blue?'}]

response = await llm_handler.prompt_chat(
  # required
  messages = messages,
  # optional, overrides parameters passed to handlers
  prompt_templates = alt_prompt_template
)

response['messages']
```

    HTTP Request: POST http://localhost:11434/api/chat "HTTP/1.1 200 OK"





    [{'role': 'system',
      'content': 'All of your answers if not in english, must contain tranlations.\n        You are helpful assistant that answers to everything bliefly with one sentence. \n     All of you responses are in latin.'},
     {'role': 'user', 'content': 'Why is the sky blue?'},
     {'role': 'assistant',
      'content': 'My answer: "Caelum caeruleum est, quia solis radii, qui per atmosphaeram transmitterentur, scatteringem lucis efficaciter faciunt."\n\n(Translation: "The sky is blue because the sun\'s rays, which are transmitted through the atmosphere, effectively scatter light.")'}]



### 5. Prompt call strategies

Sometimes it can be useful to make multiple calls for the same prompt. When answer consistency is a concern, additional resources are available, and a condition for selecting the best answer among several is understood, prompt call strategies can be applied (a sketch of such selection logic follows the list below).

Example strategies:

- `most common output of 3` : calls 3 times, uses similarity search to select the most common output
- `min output length` : returns the response with the minimal output length (calls at least 2 times)
- `max output length` : returns the response with the maximal output length (calls at least 2 times)
- `last output` : no matter how many calls, always selects the last output
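
As a rough illustration of what a length-based strategy does (not the library's actual implementation, which may differ), selecting the shortest of several responses could look like this:


```python
# Illustrative sketch only: pick the response with the shortest reply text.
# Assumes each response is shaped like the prompt_chat output above,
# i.e. the reply text lives under ['message']['content'].
def pick_min_output_length(responses):
    return min(responses, key=lambda r: len(r['message']['content']))
```

The candidate responses from the most recent strategy run are kept on `llm_handler.call_strategy_h.last_responses`, as demonstrated further below.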


```python
default_prompt_strategy = {
  'call_strategy_name' : "min_output_length",
  'call_strategy_params' :{ 'n_calls' : 3}
}

messages = [
    {'role': 'system', 
     'content': """You are helpful assistant that answers to everything bliefly with one sentence."""},
    {'role': 'user', 
     'content': 'Make a poem about clouds.'}]

response = await llm_handler.prompt_chat(
  # required
  messages = messages,
  # optional, overrides parameters passed to handlers
  **default_prompt_strategy
)

print(response['message']['content'])
```

    HTTP Request: POST http://localhost:11434/api/chat "HTTP/1.1 200 OK"
    HTTP Request: POST http://localhost:11434/api/chat "HTTP/1.1 200 OK"
    HTTP Request: POST http://localhost:11434/api/chat "HTTP/1.1 200 OK"


    Soft and fluffy, drifting by, clouds shape-shift in the sky.



```python
for resp in llm_handler.call_strategy_h.last_responses:
    print(resp['message']['content'])
    print("------------------------")
```

    Soft and fluffy, drifting by, clouds shape-shift in the sky.
    ------------------------
    Soft and white, they drift by day, whispers of the sky's gentle sway.
    ------------------------
    Across the sky, soft whispers play as wispy clouds drift by, shaping sunbeams into golden rays.
    ------------------------


### 6. Other methods

#### Estimate tokens


```python
llm_handler.estimate_tokens(
    text='Your first question was: "Why is the sky blue?"'
    )
```




    12
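
Since `estimate_tokens` returns a plain integer, it can be used, for example, to check a prompt against a context budget. A minimal sketch, with a hypothetical budget value:


```python
MAX_TOKENS = 4096  # assumed budget for illustration, not a value from the library

def fits_context(text, budget=MAX_TOKENS):
    # estimate_tokens returns an int count, as shown above
    return llm_handler.estimate_tokens(text=text) <= budget
```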



            
