Name | dandy |
Version | 0.9.2 |
home_page | None |
Summary | Intelligence Bot Framework |
upload_time | 2025-02-10 18:39:33 |
maintainer | None |
docs_url | None |
author | None |
requires_python | >=3.10 |
license | Copyright (c) 2024 Stratus Advanced Technologies and Contributors.
Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE. |
keywords | dandy, ai, llm, agent, prompt, gpt, bot, workflow, automation, artificial intelligence |
VCS | None |
bugtrack_url | None |
requirements | No requirements were recorded. |
Travis-CI | No Travis. |
coveralls test coverage | No coveralls. |
# Dandy
Dandy is an intelligence framework for developing programmatic intelligent bots and workflows.
It's opinionated, simple, and designed to be incredibly Pythonic, putting the project and its developers first.
### Why Did We Create Another AI Framework?
Programming with artificial intelligence is a very different experience from conventional programming because it is highly probabilistic.
In our experience, most existing frameworks and libraries are designed around deterministic outcomes, which is neither realistic nor, in our opinion, beneficial.
We created Dandy to focus on the flow and validation of data through your artificial intelligence systems, letting you embrace the probabilistic nature of artificial intelligence.
Our approach is batteries included: strong tooling that helps you build great interactions and lowers the barrier to entry for developers.
### Pydantic is Everyone's Friend
Dandy provides a class called "Intel", which is pydantic's "BaseModel" renamed to give a clearer separation of concerns between dandy code and your code.
This project relies critically on pydantic to handle the flow and validation of data through your artificial intelligence systems.
Make sure you have a good foundation in pydantic before continuing.
Please visit https://docs.pydantic.dev/latest/ for more information on pydantic and how to use it.
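As a minimal illustration of the kind of validation pydantic provides, here is a sketch using plain `pydantic.BaseModel` (dandy itself is not required for this snippet, and the `IngredientIntel` model is just an example of ours):

```python
from pydantic import BaseModel, ValidationError


class IngredientIntel(BaseModel):
    name: str
    quantity: float


# Valid data is coerced into typed attributes: the string "2.5" becomes the float 2.5.
intel = IngredientIntel.model_validate({"name": "flour", "quantity": "2.5"})
print(intel.quantity)  # -> 2.5

# Invalid data raises a ValidationError instead of silently propagating bad values.
try:
    IngredientIntel.model_validate({"name": "flour", "quantity": "lots"})
except ValidationError:
    print("rejected")
```

This coerce-or-reject behavior is exactly what makes pydantic a good fit for validating probabilistic LLM output.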
For larger examples, please check out the [example](https://github.com/stratusadv/dandy/tree/main/example) directory in this repository.
### Installation
``` bash
pip install dandy
```
### Recommended Project Structure
```
cookie_recipe/  <-- One of these directories for each of your modules
    __init__.py
    your_code.py
    ...
    intelligence/  <-- Dandy-related code should live in this directory
        __init__.py
        bots/
            __init__.py
            cookie_recipe_llm_bot.py  <-- Should contain a single bot (may include intels and prompts specific to this bot)
            cookie_recipe_safety_llm_bot.py
            cookie_recipe_review_llm_bot.py
            ...
        intel/
            __init__.py
            cookie_recipe_intel.py  <-- Intel classes in these files must be suffixed with "Intel", e.g. "SelectIntel"
            cookie_recipe_story_intel.py
            cookie_recipe_marketing_intel.py
            ...
        prompts/
            __init__.py
            cookie_recipe_prompts.py  <-- These files contain prompts shared across the project
            cookie_recipe_email_prompts.py
            cookie_recipe_instructions_prompts.py
            ...
        workflows/
            __init__.py
            cookie_recipe_generation_workflow.py  <-- In most cases a workflow like this is used to interact with the user
            ...

dandy_settings.py  <-- Contains the settings and LLM configs for the entire project
```
### Setting Up Dandy
It's recommended that you set a "DANDY_SETTINGS_MODULE" environment variable pointing to your dandy settings module.
If no environment variable is set, Dandy will look for a "dandy_settings" module in the current working directory or on sys.path.
```python
# dandy_settings.py

import os
from pathlib import Path

# Controls the debug recorder during development; set to False in production.
ALLOW_DEBUG_RECORDING: bool = True

# Set this to the root directory of your project; the default is the current working directory.
BASE_PATH = Path(__file__).resolve().parent

# Other DEFAULT settings -- see dandy/settings.py for all options.
DEFAULT_LLM_TEMPERATURE: float = 0.7
DEFAULT_LLM_SEED: int = 77
DEFAULT_LLM_RANDOMIZE_SEED: bool = False
DEFAULT_LLM_MAX_INPUT_TOKENS: int = 8000
DEFAULT_LLM_MAX_OUTPUT_TOKENS: int = 4000
DEFAULT_LLM_CONNECTION_RETRY_COUNT: int = 10
DEFAULT_LLM_PROMPT_RETRY_COUNT: int = 2

# Example LLM configs; you may only need one of these, but you must have a "DEFAULT" LLM config.
LLM_CONFIGS = {
    'DEFAULT': {
        'TYPE': 'ollama',
        'HOST': os.getenv("OLLAMA_HOST"),
        'PORT': int(os.getenv("OLLAMA_PORT", 11434)),
        'API_KEY': os.getenv("OLLAMA_API_KEY"),
        'MODEL': 'llama3.1:8b-instruct-q4_K_M',
    },
    'OLLAMA_LLAMA_3_2_3B_SMALL': {
        'TYPE': 'ollama',
        'HOST': os.getenv("OLLAMA_HOST"),
        'PORT': int(os.getenv("OLLAMA_PORT", 11434)),
        'API_KEY': os.getenv("OLLAMA_API_KEY"),
        'MODEL': 'llama3.2:3b-instruct-q4_K_M',

        # You can override any of the default settings in each LLM config.
        'TEMPERATURE': 0.2,
        'SEED': 65,
        'RANDOMIZE_SEED': False,
        'MAX_INPUT_TOKENS': 500,
        'MAX_OUTPUT_TOKENS': 200,
    },
    'OPENAI_GPT_3_5_TURBO': {
        'TYPE': 'openai',
        'HOST': os.getenv("OPENAI_HOST"),
        'PORT': int(os.getenv("OPENAI_PORT", 443)),
        'API_KEY': os.getenv("OPENAI_API_KEY"),
        'MODEL': 'gpt-3.5-turbo',
    },
}
```
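Once the settings module exists, pointing Dandy at it is a matter of setting the environment variable before your application imports dandy. A quick sketch (the dotted module path below is a placeholder; substitute your own project's path):

```python
import os

# Hypothetical module path -- replace with the dotted path to your own dandy_settings module.
# setdefault mirrors the lookup behavior described above: an already-set
# environment variable takes precedence over this in-process default.
os.environ.setdefault("DANDY_SETTINGS_MODULE", "cookie_recipe_project.dandy_settings")

print(os.environ["DANDY_SETTINGS_MODULE"])
```

You can also export the variable in your shell or process manager instead of setting it in code.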
### Basic Usage Example
```python
# cookie_recipe_llm_bot.py

from typing_extensions import List

from dandy.intel import BaseIntel
from dandy.llm import Prompt, BaseLlmBot, LlmConfigOptions


class CookieRecipeIngredientIntel(BaseIntel):
    name: str
    unit_type: str
    quantity: float


class CookieRecipeIntel(BaseIntel):
    name: str
    description: str
    ingredients: List[CookieRecipeIngredientIntel]
    instructions: str


class CookieRecipeLlmBot(BaseLlmBot):
    # If you do not set a config, the "DEFAULT" config from your "dandy_settings.py" will be used.
    config = 'OPENAI_GPT_3_5_TURBO'

    # You can also override LLM config options per bot.
    config_options = LlmConfigOptions(
        temperature=0.7,
        seed=77,
        randomize_seed=True,
        max_input_tokens=8000,
        max_output_tokens=4000,
    )

    # These instructions are used as the system message when the LLM is prompted.
    instructions_prompt = (
        Prompt()
        .title('You are a cookie recipe bot.')
        .text('Your job is to follow the instructions provided below.')
        .unordered_random_list([
            'Create a cookie based on the users input',
            'Make sure the instructions are easy to follow',
            'Names of recipe should be as short as possible',
        ])
    )

    # The process method is required on all dandy handlers (bots and workflows) for debugging and exception handling.
    @classmethod
    def process(cls, prompt: Prompt) -> CookieRecipeIntel:
        return cls.process_prompt_to_intel(
            prompt=prompt,
            intel_class=CookieRecipeIntel,
        )


cookie_recipe_intel = CookieRecipeLlmBot.process(
    prompt=Prompt().text('I love broccoli and oatmeal!'),
)

print(cookie_recipe_intel.model_dump_json(indent=4))
```
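Because every Intel is ultimately a pydantic model, the JSON the LLM returns is parsed and validated into a typed object before your code sees it. A rough sketch of that round trip using plain `pydantic.BaseModel` in place of `BaseIntel`, so it runs without an LLM (the payload below is made-up example data):

```python
from typing import List

from pydantic import BaseModel


class IngredientIntel(BaseModel):
    name: str
    unit_type: str
    quantity: float


class RecipeIntel(BaseModel):
    name: str
    ingredients: List[IngredientIntel]


# JSON of the shape an LLM might return; note "quantity" arrives as a string here.
raw = '{"name": "Oat Cookies", "ingredients": [{"name": "Rolled oats", "unit_type": "cups", "quantity": "1"}]}'

# Parse and validate in one step; the string "1" is coerced to the float 1.0,
# and malformed payloads would raise a ValidationError instead.
recipe = RecipeIntel.model_validate_json(raw)
print(recipe.ingredients[0].quantity)  # -> 1.0
```

Working with typed attributes like `recipe.ingredients[0].quantity`, rather than raw dicts, is what lets your downstream code stay deterministic even though the LLM output is not.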
```json
// Output (We cannot validate the quality of this recipe; you're more than welcome to try it!)
{
    "name": "Broccoli Oatmeal Cookies",
    "description": "A delicious cookie recipe featuring the flavors of broccoli and oatmeal.",
    "ingredients": [
        {
            "name": "All-purpose flour",
            "unit_type": "cups",
            "quantity": 2.5
        },
        {
            "name": "Rolled oats",
            "unit_type": "cups",
            "quantity": 1.0
        },
        {
            "name": "Brown sugar",
            "unit_type": "cups",
            "quantity": 0.5
        },
        {
            "name": "Granulated sugar",
            "unit_type": "cups",
            "quantity": 0.25
        },
        {
            "name": "Large eggs",
            "unit_type": "pieces",
            "quantity": 1.0
        },
        {
            "name": "Melted butter",
            "unit_type": "tablespoons",
            "quantity": 1.0
        },
        {
            "name": "Steamed broccoli florets",
            "unit_type": "cups",
            "quantity": 0.5
        },
        {
            "name": "Vanilla extract",
            "unit_type": "teaspoons",
            "quantity": 1.0
        }
    ],
    "instructions": "Preheat oven to 375°F (190°C). Line a baking sheet with parchment paper. In a medium bowl, whisk together flour, oats, brown sugar, and granulated sugar. In a large bowl, whisk together eggs, melted butter, steamed broccoli florets, and vanilla extract. Add the dry ingredients to the wet ingredients and stir until combined. Scoop tablespoon-sized balls of dough onto the prepared baking sheet, leaving 2 inches of space between each cookie. Bake for 10-12 minutes or until lightly golden brown."
}
```