goalchain 0.0.3

Summary: GoalChain is a simple but effective framework for enabling goal-oriented conversation flows for human-LLM and LLM-LLM interaction.
Upload time: 2024-05-25 08:36:25
Requires Python: >=3.9
License: MIT
Keywords: agent, chat, conversation, flow, goal, llm
# GoalChain

GoalChain is a simple but effective framework for enabling goal-oriented conversation flows for human-LLM and LLM-LLM interaction.

## Installation

```console
pip install goalchain
```

## Getting started

Let's import the `Field`, `ValidationError`, `Goal` and `GoalChain` classes, which are the basis for the conversation flow.
```py
from goalchain import Field, ValidationError, Goal, GoalChain 
```

In this example we will create an AI assistant whose goal is to collect information from a customer about their desired product order. We define the information to be collected using `Field` objects within the `ProductOrderGoal`, which is a child of `Goal`: 
* the product name,
* the customer's email, and 
* the quantity.

We also define a validator for the quantity (after type-casting it to an `int`). `ValidationError` is used to pass error messages back to the conversation. These messages should be human-readable.

`format_hint` is a natural language type hint for the LLM's JSON mode output.

```py
def quantity_validator(value):
    try:
        value = int(value)
    except (ValueError, TypeError):
        raise ValidationError("Quantity must be a valid number")
    if value <= 0:
        raise ValidationError("Quantity cannot be less than one")
    if value > 100:
        raise ValidationError("Quantity cannot be greater than 100")
    return value

class ProductOrderGoal(Goal):
    product_name = Field("product to be ordered", format_hint="a string")
    customer_email = Field("customer email", format_hint="a string")
    quantity = Field("quantity of product", format_hint="an integer", validator=quantity_validator)
```

In case the customer changes their mind, let's create another `Goal` child class called `OrderCancelGoal`.
We will request an optional reason for the customer's cancellation of the ongoing order. By specifying that the field is "(optional)" in the description, the LLM will know it isn't necessary to achieve the goal. 

```py
class OrderCancelGoal(Goal):
    reason = Field("reason for order cancellation (optional)", format_hint="a string")
```

Note that the field object names, such as `product_name`, are passed directly to the LLM prompt, so they are part of the prompt-engineering task, as is every other string. 

Essentially the classes we defined are like forms to be filled out by the customer, but they lack instructions. Let's add those by instantiating the classes as objects. 

```py
product_order_goal = ProductOrderGoal(
    label="product_order",
    goal="to obtain information on an order to be made",
    opener="I see you are trying to order a product, how can I help you?",
    out_of_scope="Ask the user to contact sales team at sales@acme.com"
)

order_cancel_goal = OrderCancelGoal(
    label="cancel_current_order",
    goal="to obtain the reason for the cancellation",
    opener="I see you are trying to cancel the current order, how can I help you?",
    out_of_scope="Ask the user to contact the support team at support@acme.com",
    confirm=False
)
```

We define:
* an internal label (also part of our prompt-engineering task),
* the goal, expressed as a "to ..." statement,
* a default `opener` - something the AI assistant will use given no prior input,
* and, importantly, instructions for the AI assistant on what to do in case of an out-of-scope user query.

The `confirm` flag determines whether the AI assistant will ask for confirmation once it has all of the required information defined using the `Field` objects. It is `True` by default. We don't need a confirmation for the order cancellation goal, since it is in itself already a kind of confirmation.

Next we need to connect the goals together.

```py
product_order_goal.connect(goal=order_cancel_goal, 
                           user_goal="to cancel the current order", 
                           hand_over=True, 
                           keep_messages=True)

```

The `user_goal` is another "to ..." statement. Without `hand_over=True` the AI agent would reply with the canned `opener`. Setting it to `True` ensures the conversation flows smoothly. Sometimes you may want a canned response, other times not. 

`keep_messages=True` means the `order_cancel_goal` will receive the full history of the conversation with `product_order_goal`, otherwise it will be wiped. Again, sometimes a wipe of the conversation history may be desired, such as when simulating different AI personalities.
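For instance, a hedged sketch of a connection that wipes the history on hand-over; the support-persona goal below is purely illustrative and not part of the running example:

```py
# A hypothetical goal handled by a different AI persona (names are illustrative).
class SupportPersonaGoal(Goal):
    issue = Field("description of the support issue", format_hint="a string")

support_persona_goal = SupportPersonaGoal(
    label="customer_support",
    goal="to understand the customer's support issue",
    opener="You're through to Acme support. What seems to be the problem?",
    out_of_scope="Ask the user to contact the support team at support@acme.com"
)

# keep_messages=False wipes the order conversation, so the support persona
# starts with a clean slate.
product_order_goal.connect(goal=support_persona_goal,
                           user_goal="to speak to customer support",
                           hand_over=True,
                           keep_messages=False)
```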

Let's also consider the possibility of a really indecisive customer, and give them the option to "cancel the cancellation". 

```py
order_cancel_goal.connect(goal=product_order_goal, 
                          user_goal="to continue with the order anyway", 
                          hand_over=True, 
                          keep_messages=True)
```

At some point you may have wondered whether you can make a goal without any `Field` objects. You can! Such a goal is a routing goal, defined only by the connections it has. This is useful, for example, in a voice-mail menu system. 
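A minimal sketch of such a routing goal, reusing the two goals defined above (the class name, label, and wording here are illustrative):

```py
class MainMenuGoal(Goal):
    pass  # no Field objects: this goal is defined purely by its connections

main_menu_goal = MainMenuGoal(
    label="main_menu",
    goal="to route the customer to the right department",
    opener="Welcome to Acme. Would you like to place an order or cancel one?",
    out_of_scope="Ask the user to contact support@acme.com",
    confirm=False
)

main_menu_goal.connect(goal=product_order_goal,
                       user_goal="to order a product",
                       hand_over=True,
                       keep_messages=True)

main_menu_goal.connect(goal=order_cancel_goal,
                       user_goal="to cancel an existing order",
                       hand_over=True,
                       keep_messages=True)
```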

You may also be curious whether you can connect a goal to itself. You can! This is useful, for example, when using `confirm=False` on a `Goal`-inheriting object that requires sequential user input of some variety. 
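For instance, a rough sketch reusing `order_cancel_goal` (which already has `confirm=False`); the `user_goal` wording is illustrative:

```py
# Connecting the goal to itself lets it collect another round of the same input,
# e.g. a customer cancelling several orders in a row.
order_cancel_goal.connect(goal=order_cancel_goal,
                          user_goal="to cancel another order",
                          hand_over=True,
                          keep_messages=True)
```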

You can also chain connects, e.g. `goal.connect(...).connect(...).connect(...)`.

Finally, let's use `GoalChain` to set the initial goal and test our AI sales assistant!

```py
goal_chain = GoalChain(product_order_goal)
```

Note that each goal can use a separate LLM API as enabled by [LiteLLM](https://github.com/BerriAI/litellm), and if you have the required environment variables set, you can use any model from the supported [model providers](https://docs.litellm.ai/docs/providers).

The default model is `"gpt-4-1106-preview"`, that is:

```py
product_order_goal = ProductOrderGoal(
    # ... other arguments as before
    model="gpt-4-1106-preview",
    json_model="gpt-4-1106-preview"
)
```
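Since each goal accepts its own `model` and `json_model`, you can, in principle, mix providers. A hedged sketch (the non-default model string is just an example of a LiteLLM-supported Anthropic model and requires `ANTHROPIC_API_KEY` to be set):

```py
product_order_goal = ProductOrderGoal(
    # ... other arguments as before
    model="gpt-4-1106-preview",
    json_model="gpt-4-1106-preview"
)

order_cancel_goal = OrderCancelGoal(
    # ... other arguments as before
    model="claude-3-haiku-20240307",  # example only: requires ANTHROPIC_API_KEY
    json_model="gpt-4-1106-preview"
)
```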

You can also pass LiteLLM [common parameters](https://litellm.vercel.app/docs/completion/input) using `params`, for example:

```py
product_order_goal = ProductOrderGoal(
    # ... other arguments as before
    model="gpt-4-1106-preview",
    json_model="gpt-4-1106-preview",
    params={"temperature": 1.5, "max_tokens": 10}
)
```

You can also use `params` to call local models [using vLLM](https://docs.litellm.ai/docs/providers/vllm).

When using the default `"gpt-4-1106-preview"` model, remember to set the `OPENAI_API_KEY` environment variable.

```py
import os
os.environ["OPENAI_API_KEY"] = "sk-ABC..."
```

Note: The code so far is available as a [gist](https://gist.github.com/adlumal/d5d1138b57011b0b61a20e83b7484377). Paste it into a Jupyter notebook, preceded by `!pip install goalchain` to get started with the live example below.

Usually it is the user who prompts the AI agent first, but if this is not the case, we call `get_response` without any arguments, or use `None` as the argument:

```py
goal_chain.get_response()
```

```txt
{'type': 'message',
 'content': 'Great choice! Could you please provide me with your email address to proceed with the order?',
 'goal': <__main__.ProductOrderGoal at 0x7f8c8b687110>}
```

GoalChain returns a `dict` containing the type of response (either `message` or `data`), the content of the response (right now just our canned response), and the current `Goal`-inheriting object.
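As a minimal sketch (not part of the transcript below), calling code might dispatch on this dict as follows; `process_order` and the example message are hypothetical placeholders:

```py
def process_order(order):
    """Hypothetical handler for the parsed field values."""
    print("Processing order:", order)

user_message = "Hi, I'd like to buy a vacuum cleaner"  # example user input

response = goal_chain.get_response(user_message)

if response["type"] == "message":
    # Still mid-conversation: show the assistant's reply and wait for more input.
    print(response["content"])
elif response["type"] == "data":
    # Goal achieved: content holds the parsed field values keyed by field name.
    process_order(response["content"])
```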

Let's query our AI assistant with a potential purchase.

```py
goal_chain.get_response("Hi, I'd like to buy a vacuum cleaner")
```

```txt
{'type': 'message',
 'content': 'Great! Could you please provide your email address so we can send the confirmation of your order?',
 'goal': <__main__.ProductOrderGoal at 0x7ff0fb283090>}
```

The AI assistant is working towards achieving its current goal, and gathering the required information for an order. 

```py
goal_chain.get_response("Sure, it is john@smith.com")
```

```txt
{'type': 'message',
 'content': 'Thank you, John. Which model of vacuum cleaner would you like to order?',
 'goal': <__main__.ProductOrderGoal at 0x7ff0fb283090>}
```

```py
goal_chain.get_response("The 2000XL model")
```

```txt
{'type': 'message',
 'content': 'How many of the 2000XL model would you like to order?',
 'goal': <__main__.ProductOrderGoal at 0x7ff0fb283090>}
 
```

Let's test whether our AI assistant can handle a cancellation of the current order.

```py
goal_chain.get_response("Actually I changed my mind, cancel this order")
```

```txt
{'type': 'message',
 'content': 'Of course, I can assist with that. Could you please tell me the reason for the cancellation?',
 'goal': <__main__.OrderCancelGoal at 0x7ff0fb275650>}
 
```

It worked. Note that the returned goal is now of type `OrderCancelGoal`. We've switched goals. Let's also test whether we can switch back. 

```py
goal_chain.get_response("Actually, yeah, I would like to buy the vacuum cleaner")
```

```txt
{'type': 'message',
 'content': 'Understood. How many of the 2000XL model would you like to order?',
 'goal': <__main__.ProductOrderGoal at 0x7ff0fb283090>}
```

We're back to the `ProductOrderGoal`.

```py
goal_chain.get_response("1 please")
```

```txt
{'type': 'message',
 'content': 'To confirm, you would like to order one 2000XL vacuum cleaner and the order will be sent to john@smith.com, is that correct?',
 'goal': <__main__.ProductOrderGoal at 0x7ff0fb283090>}
```

The AI assistant confirms our order. If we didn't like this behaviour we would use `confirm=False`.

Let's see how the assistant responds to an out-of-scope query.

```py
goal_chain.get_response("Is it a good vacuum cleaner? What do you think?")
```

```txt
{'type': 'message',
 'content': "For product reviews and additional information, I recommend contacting our sales team at sales@acme.com. They can help with your inquiries. Meanwhile, can you please confirm if you'd like to proceed with the order for one 2000XL vacuum cleaner to john@smith.com?",
 'goal': <__main__.ProductOrderGoal at 0x7ff0fb283090>}
```

The AI assistant redirects us to the sales team inbox, as defined earlier, and reiterates the confirmation.

But let's throw a curve-ball...
 
```py
goal_chain.get_response("Ok, I'd actually like to make that an order of 500")
```

```txt
{'type': 'message',
 'content': "Just to clarify, you'd like to order 500 units of the 2000XL vacuum cleaner, with the order confirmation sent to john@smith.com. Is that correct?",
 'goal': <__main__.ProductOrderGoal at 0x7ff0fb283090>}
```
 
```py
goal_chain.get_response("Yes")
```

```txt
{'type': 'message',
 'content': 'I’m sorry, but I need to inform you that the quantity cannot be greater than 100 for an order. If you would like to proceed with an order within this limit, please let me know.',
 'goal': <__main__.ProductOrderGoal at 0x7ff0fb283090>}
```

Via the `ValidationError` message, the validator has given the AI assistant enough information to explain why it cannot process this quantity.

Note that, for token-efficiency and performance reasons, GoalChain only validates inputs once the `Goal` has been completed. If you'd like to validate inputs as you go, you have two options:

1. Use a `Goal` with only one `Field`, and `confirm=False`. Chain these goals instead of using multiple fields in a single `Goal` (see the sketch after this list).

1. Use a soft-prompt, e.g. `quantity = Field("quantity of product (no more than 100)", format_hint="an integer")`. This approach gives the user immediate feedback, but it is not foolproof, so it is still recommended to use a validator as well.
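For the first option, a rough sketch might look like this (class names, labels, and openers are illustrative; you would still connect these goals onward as shown earlier):

```py
# Each goal collects exactly one field and skips the confirmation step, so the
# quantity is validated as soon as the customer provides it.
class QuantityGoal(Goal):
    quantity = Field("quantity of product", format_hint="an integer",
                     validator=quantity_validator)

class EmailGoal(Goal):
    customer_email = Field("customer email", format_hint="a string")

quantity_goal = QuantityGoal(
    label="collect_quantity",
    goal="to obtain the quantity to be ordered",
    opener="How many would you like to order?",
    out_of_scope="Ask the user to contact the sales team at sales@acme.com",
    confirm=False
)

email_goal = EmailGoal(
    label="collect_email",
    goal="to obtain the customer's email address",
    opener="Which email address should we use for the order?",
    out_of_scope="Ask the user to contact the sales team at sales@acme.com",
    confirm=False
)

quantity_goal.connect(goal=email_goal,
                      user_goal="to provide the email address for the order",
                      hand_over=True,
                      keep_messages=True)
```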

Let's complete the order.

```py
goal_chain.get_response("Alright, I'll guess I'll just go with 1")
```

```txt
{'type': 'message',
 'content': 'To confirm, you would like to order one 2000XL vacuum cleaner and the order will be sent to john@smith.com, is that correct?',
 'goal': <__main__.ProductOrderGoal at 0x7ff0fb283090>}
``` 

```py
goal_chain.get_response("That's right")
```

```txt
{'type': 'data',
 'content': {'customer_email': 'john@smith.com',
  'product_name': '2000XL',
  'quantity': 1},
 'goal': <__main__.ProductOrderGoal at 0x7ff0fb283090>}
``` 

The content returned is a dictionary parsed from the output of the LLM's JSON mode. The keys are our field instance names. We can now use the data to perform some kind of action, such as processing the order of our hypothetical 2000XL vacuum cleaner. 

Note that in reality, if you were building such a system, you would need a dedicated product-lookup goal so as not to allow arbitrary or meaningless product names. 
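As a lighter-weight alternative to a dedicated product-lookup goal, you could attach a validator that checks the name against a catalogue. A hedged sketch (the catalogue and its contents are made up):

```py
# Hypothetical catalogue of orderable products.
CATALOGUE = {"2000XL", "3000XL Turbo", "MiniVac 500"}

def product_validator(value):
    if value not in CATALOGUE:
        raise ValidationError(
            f"Unknown product '{value}'. Available products: "
            + ", ".join(sorted(CATALOGUE))
        )
    return value

class ProductOrderGoal(Goal):
    product_name = Field("product to be ordered", format_hint="a string",
                         validator=product_validator)
    customer_email = Field("customer email", format_hint="a string")
    quantity = Field("quantity of product", format_hint="an integer",
                     validator=quantity_validator)
```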

Let's send our confirmation that the order has been processed via `simulate_response`. We will also use `rephrase=True` to rephrase the output, which will appear more natural if the customer frequently interacts with the goal. 
 
```py
goal_chain.simulate_response(f"Thank you for ordering from Acme. Your order will be dispatched in the next 1-3 business days.", rephrase = True)
```

```txt
{'type': 'message',
 'content': 'We appreciate your purchase with Acme! Rest assured, your order will be on its way within the next 1 to 3 business days.',
 'goal': <__main__.ProductOrderGoal at 0x7ff0fb283090>}
```

At this point we may end the session or connect back to a menu or routing goal for further input. 

If you would like to customise or contribute to GoalChain, or report any issues, visit the [GitHub page](https://github.com/adlumal/GoalChain).  


            
