lasagna-ai

- **Name:** lasagna-ai
- **Version:** 0.6.2
- **Summary:** Layered agents!
- **Uploaded:** 2024-06-23 17:46:51
- **Requires Python:** >=3.8
- **Keywords:** agent, agents, ai, hierarchical, layered, layers, llm
![Lasagna AI Logo](https://raw.githubusercontent.com/Rhobota/lasagna-ai/main/logos/lasagna-ai.png)

# Lasagna AI

[![PyPI - Version](https://img.shields.io/pypi/v/lasagna-ai.svg)](https://pypi.org/project/lasagna-ai)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/lasagna-ai.svg)](https://pypi.org/project/lasagna-ai)
![Test Status](https://github.com/Rhobota/lasagna-ai/actions/workflows/test.yml/badge.svg?branch=main)

- 🥞  **Layered agents!**
  - Agents for your agents!
  - Tool-use and layering FTW 💪
  - Ever wanted a _recursive_ agent? Now you can have one! 🤯
  - _Parallel_ tool-calling by default.
  - Fully asyncio.
  - 100% Python type hints.
  - Functional-style 😎
  - (optional) Easy & pluggable caching! 🏦

- 🚣  **Streamable!**
  - Event streams for _everything_.
  - Asyncio generators are awesome.
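
To illustrate the pattern behind that last bullet (plain Python here, _not_ the Lasagna API): an async generator lets a consumer process events one at a time, as they arrive, instead of waiting for the whole response.

```python
import asyncio
from typing import AsyncIterator, List

# Illustrative only -- not Lasagna's API. An async generator yields
# events one at a time, so a consumer can stream them as they arrive.
async def token_stream() -> AsyncIterator[str]:
    for token in ["Layered", " ", "agents", "!"]:
        await asyncio.sleep(0)  # yield control, as real network I/O would
        yield token

async def consume() -> List[str]:
    received: List[str] = []
    async for token in token_stream():
        received.append(token)
    return received

print("".join(asyncio.run(consume())))  # → Layered agents!
```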

- 🗃️ **Easy database integration!**
  - Don't rage when trying to store raw messages and token counts. 😡 🤬
  - Yes, you _can_ have _both_ streaming and easy database storage.

- ↔️ **Provider/model agnostic and interoperable!**
  - Native support for [OpenAI](https://platform.openai.com/docs/models), [Anthropic](https://docs.anthropic.com/en/docs/welcome), [NVIDIA NIM/NGC](https://build.nvidia.com/explore/reasoning) (+ more to come).
  - Message representations are canonized. 😇
  - Supports vision!
  - Easily build committees!
  - Swap providers or models mid-conversation.
  - Delegate tasks among model providers or model sizes.
  - Parallelize all the things.

-----

## Table of Contents

- [Installation](#installation)
- [Used By](#used-by)
- [Quickstart](#quickstart)
- [Debug Logging](#debug-logging)
- [Special Thanks](#special-thanks)
- [License](#license)

## Installation

```console
pip install -U "lasagna-ai[openai,anthropic]"
```

## Used By

Lasagna is used in production by:

[![AutoAuto](https://raw.githubusercontent.com/Rhobota/lasagna-ai/main/logos/autoauto.png)](https://www.autoauto.ai/)

## Quickstart

Here is the _simplest_ possible agent (it doesn't add *anything* to the underlying model).
More complex agents would add tools and/or layer agents on top of agents, but not this one!
Run it in your terminal and you can chat interactively with the model. 🤩

```python
from lasagna import (
    bind_model,
    recursive_extract_messages,
    flat_messages,
)

from lasagna.tui import (
    tui_input_loop,
)

import asyncio


@bind_model('openai', 'gpt-3.5-turbo-0125')
async def most_simple_agent(model, event_callback, prev_runs):
    messages = recursive_extract_messages(prev_runs)
    tools = []
    new_messages = await model.run(event_callback, messages, tools)
    return flat_messages(new_messages)


async def main():
    system_prompt = "You are grumpy."
    await tui_input_loop(most_simple_agent, system_prompt)


if __name__ == '__main__':
    asyncio.run(main())
```

The code above does _not_ use Python type hints (lame! 👎). As agents grow
more complex, with nested data structures and agents that call other agents,
type hints will be your best friend, so we suggest using them from day 1!
Below is the same example, but with type hints added. Check your code with
`mypy` or `pyright`, because type hints are useless unless you have a tool
that enforces them.

```python
from lasagna import (
    bind_model,
    recursive_extract_messages,
    flat_messages,
)

from lasagna.tui import (
    tui_input_loop,
)

from lasagna.types import (
    Model,
    EventCallback,
    AgentRun,
)

from typing import List, Callable

import asyncio


@bind_model('openai', 'gpt-3.5-turbo-0125')
async def most_simple_agent(
    model: Model,
    event_callback: EventCallback,
    prev_runs: List[AgentRun],
) -> AgentRun:
    messages = recursive_extract_messages(prev_runs)
    tools: List[Callable] = []
    new_messages = await model.run(event_callback, messages, tools)
    return flat_messages(new_messages)


async def main() -> None:
    system_prompt = "You are grumpy."
    await tui_input_loop(most_simple_agent, system_prompt)


if __name__ == '__main__':
    asyncio.run(main())
```

## Debug Logging

This library logs via Python's built-in `logging` module, mostly at the `INFO` level. Here's a snippet of code you can put in _your_ app to see those traces:

```python
import logging

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
)

# ... now use Lasagna as you normally would, but you'll see extra log traces!
```
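
If `INFO` on the root logger is too chatty for your app, you can scope the level to this library's loggers instead. (Assumption: Lasagna logs under its package name, `lasagna`; check your log output and adjust the logger name if it differs.)

```python
import logging

# Keep the root logger quiet, but surface INFO traces from the
# 'lasagna' logger hierarchy (assumed logger name -- verify in your logs).
logging.basicConfig(level=logging.WARNING)
logging.getLogger('lasagna').setLevel(logging.INFO)
```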

## Special Thanks

Special thanks to those who inspired this library:
- Numa Dhamani (buy her book: [Introduction to Generative AI](https://a.co/d/03dHnRmX))
- Dave DeCaprio's [voice-stream library](https://github.com/DaveDeCaprio/voice-stream)

## License

`lasagna-ai` is distributed under the terms of the [MIT](https://spdx.org/licenses/MIT.html) license.

## Joke Acronym

Layered Agents with toolS And aGeNts and Ai
