grafo-ai-tools


Name: grafo-ai-tools
Version: 0.1.3
Summary: A set of tools for easily interacting with LLMs.
Upload time: 2025-09-03 23:43:33
Requires Python: >=3.11
License: MIT
Keywords: ai, agents, llm, workflows

# Install
```
uv add grafo-ai-tools
```
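
If you prefer pip, installing straight from PyPI should work the same way:
```
pip install grafo-ai-tools
```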

# WHAT
A set of tools for easily interacting with LLMs.

# WHY
Building AI-driven software relies on a handful of recurring utilities, such as prompt building and calling LLMs over HTTP. On top of that, writing agents and workflows with conventional code structures can prove particularly challenging.

# HOW
This simple library offers a set of predefined functions for:
- Easy prompting - you need only provide a path
- Calling LLMs - instructor takes care of that for us
- Modifying response models - we use Pydantic (duh)

Additionally, we provide `grafo` out of the box for convenient workflow building.

## About Grafo
Grafo (see Recommended Docs below) is a library for building executable DAGs in which each node wraps a coroutine. Since the DAG abstraction fits AI-driven development particularly well, we provide the `BaseWorkflow` class with the following methods:
- `task` for LLM calling
- `redirect` to help you manage redirections in your `grafo` workflows

# Examples
### Simple text:
```python
from ait import AIT

ait = AIT("gpt-5")
path = "./prompt.md"
response = ait.chat(path)
print(response.completion)
print(response.content)
```

### Structured response:
```python
from ait import AIT
from pydantic import BaseModel

class Purchase(BaseModel):
    product: str
    quantity: int

ait = AIT("gpt-5")
path = "./prompt.md" # PROMPT: {{ message }}
message = "I want to buy 5 apples"
response = ait.asend(response_model=Purchase, path=path, message=message)
```
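
The `# PROMPT: {{ message }}` comment hints at what the prompt file contains: a Jinja2 template whose variables are filled from the keyword arguments passed to the call. A hypothetical `prompt.md` for this example (not shipped with the package) could be as simple as:
```
Extract the purchase details from the following request:

{{ message }}
```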

### Structured response with model type injection:
```python
from typing import Literal

from ait import AIT
from pydantic import BaseModel

class Purchase(BaseModel):
    product: str
    quantity: int

ait = AIT("gpt-5")
path = "./prompt.md" # PROMPT: {{ message }}
message = "I want to buy 5 apples"
available_fruits = ["apple", "banana", "orange"]
FruitModel = ait.inject_types(Purchase, [
    ("product", Literal[tuple(available_fruits)])
])
response = ait.asend(response_model=FruitModel, path=path, message=message)
```
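
For reference, the type injection above is roughly equivalent to narrowing the field by hand with Pydantic's `create_model`; a sketch of that equivalent (not the library's actual implementation):
```python
from typing import Literal

from pydantic import BaseModel, create_model

class Purchase(BaseModel):
    product: str
    quantity: int

available_fruits = ["apple", "banana", "orange"]

# Subclass Purchase and narrow `product` to the allowed fruit names.
FruitModel = create_model(
    "FruitModel",
    __base__=Purchase,
    product=(Literal[tuple(available_fruits)], ...),
)
```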

### Simple workflow:
```python
from typing import Literal

from ait import AIT, BaseWorkflow, Node
from grafo import TreeExecutor  # assumed import path; TreeExecutor is provided by grafo
from pydantic import BaseModel

class Purchase(BaseModel):
    product: str
    quantity: int

class Eval(BaseModel):
    is_valid: bool
    reasoning: str
    humanized_failure_reason: str | None

ait = AIT("gpt-5")
prompts_path = "./"
message = "I want to buy 5 apples"
available_fruits = ["apple", "banana", "orange"]
FruitModel = ait.inject_types(Purchase, [
    ("product", Literal[tuple(available_fruits)])
])

class PurchaseWorkflow(BaseWorkflow):
    def __init__(self, *args, **kwargs):
        ...

    async def run(self, message) -> Purchase:
        # Node that extracts the purchase via an LLM call
        purchase_node = Node[FruitModel](
            uuid="fruit purchase node",
            coroutine=self.task,
            kwargs=dict(
                path=f"{prompts_path}/purchase.md",
                response_model=FruitModel,
                message=message,
            ),
        )
        # Node that evaluates the extracted purchase
        validation_node = Node[Eval](
            uuid="purchase eval node",
            coroutine=self.task,
            kwargs=dict(
                path=f"{prompts_path}/eval.md",
                response_model=Eval,
                message=message,
                purchase=lambda: purchase_node.output,
            ),
        )
        # After the validation node runs, hand redirection handling to self.redirect
        validation_node.on_after_run = (
            self.redirect,
            dict(
                source_node=purchase_node,
                validation_node=validation_node,
            ),
        )
        await purchase_node.connect(validation_node)
        executor = TreeExecutor(uuid="Purchase Workflow", roots=[purchase_node])
        await executor.run()

        if not purchase_node.output or not validation_node.output.is_valid:
            raise ValueError("Purchase failed.")

        return purchase_node.output
```
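
With the workflow defined, running it is just a matter of instantiating the class and awaiting `run`. A minimal driver sketch, assuming a no-argument `__init__` like the stub above:
```python
import asyncio

# Hypothetical usage; adapt the constructor arguments to your own __init__.
workflow = PurchaseWorkflow()
purchase = asyncio.run(workflow.run("I want to buy 5 apples"))
print(purchase)
```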

## Recommended Docs
- `instructor` https://python.useinstructor.com/
- `jinja2` https://jinja.palletsprojects.com/en/stable/
- `pydantic` https://docs.pydantic.dev/latest/
- `grafo` https://github.com/paulomtts/grafo

            
