            > _"a **towel** is about the most massively useful thing an interstellar AI hitchhiker can have"</br>
> -- Douglas Adams_

# towel <img src="docs/img/towel-logo.png" width="75px">
![PyPI](https://img.shields.io/pypi/v/42towels?label=pypi%2042towels)

compose LLM Python functions into dynamic, self-modifying plans

```python
space_trip = plan([

    step(pick_planet),

    pin('are_you_ready'),
    step(how_ready_are_you),
    route(lambda result: 'book' if result['how_ready_are_you']['score'] > 95 else 'train'),

    pin('train'),
    step(space_bootcamp),
    route(lambda x: 'are_you_ready'),

    pin('book'),
    step(reserve_spaceship)
])
```

- [why towel](#why-towel)
- [features](#features)
- [how to play](#how-to-play)
  - [install](#install)
    - [run examples](#run-examples)
  - [LLMs with no towels](#llms-with-no-towels)
    - [LLM libraries and frameworks are unnecessary](#llm-libraries-and-frameworks-are-unnecessary)
    - [basics](#basics)
    - [connect to LLM](#connect-to-llm)
    - [ask LLM a question](#ask-llm-a-question)
    - [stream responses from LLM](#stream-responses-from-llm)
    - [use the built in instructor](#use-the-built-in-instructor)
    - [using tools (a.k.a. function calling)](#using-tools-aka-function-calling)
  - [towels](#towels)
    - [@towel, thinker, intel, tow](#towel-thinker-intel-tow)
    - [a more practical example](#a-more-practical-example)
  - [plans](#plans)
    - [vocabulary](#vocabulary)
      - [step](#step)
      - [pin](#pin)
      - [route](#route)
    - [flow and data](#flow-and-data)
    - [executing a plan](#executing-a-plan)
    - [mind maps](#mind-maps)
    - [kick off intel](#kick-off-intel)
  - [making plans](#making-plans)
  - [plans that make plans](#plans-that-make-plans)
- [license](#license)

# why towel

the name comes from the Hitchhiker's Guide to the Galaxy<br/>
where [a towel is](https://en.wikipedia.org/wiki/Towel_Day) the most massively useful thing an interstellar AI hitchhiker can have.

this ultimate truth applies to all the universes including the one full of Large Language Models (a.k.a. LLMs)

any Python function wrapped in a `@towel` becomes the most massively useful and unlocks the power of LLMs:

```python
@towel
def find_meaning_of_life():
  llm, *_ = tow()             ## tows llm into this function.. and more
  llm.think("... about it")
```

since this is just a function, it can be composed with other functions (LLM-powered or not), leaving it to just Python and you to create things.

but, in case help with composing is needed, towel can assist with making plans that use @towels (i.e. these functions):

```python
plan([

  step(find_meaning_of_life),
  route(lambda result: 'conclude' if result['find_meaning_of_life']['confidence'] > 0.8 else 'test meaning'),

  pin('test meaning'),
  step(reality_check),
  route(lambda x: 'find_meaning_of_life'),

  pin('conclude')
])
```

# features

### plan it like you mean it!
- more powerful than chains, simpler than graphs
- self-modifying plans (plans that make plans)
- simple vocabulary: "`step`", "`route`" and "`pin`" for any plan
- mind maps: each step can have its very own LLM
- dynamic routing based on pure functions and step results

### functions over objects
- functions compose.. into elegant workflows
- any function can become an LLM: just wrap it in a `@towel`

### function calling / tool use
- one interface for local models and cloud models
- it's great, it's [pydantic](https://github.com/pydantic/pydantic)

### strong LLM response typing
- it's great, it's pydantic
- built-in support for the [instructor](https://github.com/jxnl/instructor) library, enabling structured outputs
- "[DeepThought](https://github.com/tolitius/towel/blob/6caa70312a3715da7adae89149d6d1ab684a2c37/src/towel/brain/base.py#L8-L23)" full with thoughts for non instructor responses

### thread safety
- dynamic `@towel` context handling
- modify context at runtime "`with intel(llm=llm)`"

### multi-model support
- one "`thinker`" API for all
- switch between different LLM providers (Claude, Ollama, etc.)
- extensible to support additional providers via "[Brain](https://github.com/tolitius/towel/blob/6caa70312a3715da7adae89149d6d1ab684a2c37/src/towel/brain/base.py#L25)"

### local models love
- feature parity with cloud models via Ollama integration

# how to play

let's look around and travel them universes one by one. towel by towel.

## install

the "[Answer to the Ultimate Question of Life, the Universe, and Everything](https://simple.wikipedia.org/wiki/42_(answer))" is 42.

hence in order to harness "the power of the towel" we need to install 42 of them:

```bash
pip install 42towels
```

### run examples

there are examples in [docs/examples](docs/examples) that, after "pip install 42towels", can be run as:

```bash
$ python ./docs/examples/function_caller.py -p anthropic -m claude-3-haiku-20240307
```
or
```bash
$ python ./docs/examples/function_caller.py -m llama3:70b  ## will use Ollama by default
```

## LLMs with no towels

### LLM libraries and frameworks are unnecessary

the hidden truth of every LLM library or framework is that most of the time _**you don't need an LLM library or framework**_, because it all comes down to a simple sequence of two steps:

* come up with a question (a.k.a. "a prompt"): e.g. "what is the meaning of life? think step by step"
* call an HTTP API
```bash
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "what is the meaning of life? think step by step"
}'
```
that is pretty much it.<br/>
do it over and over again, and it would be called a "chat"<br/>
do it with Generative Pre-trained Transformer models, and this chat would be called "chat gpt" (i.e. your _own_ chat gpt).
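
to make that concrete, here is a minimal sketch of such a "chat" in plain Python with no library at all (a sketch only: it assumes a local Ollama at the default port, uses its `/api/generate` endpoint, and the naive replay-the-history "memory" is an illustrative choice):

```python
import requests

history = []

def chat(prompt: str) -> str:
    """one 'chat' turn: append to history, POST, remember the reply"""
    history.append(f"user: {prompt}")
    reply = requests.post("http://localhost:11434/api/generate",
                          json={"model": "llama3",
                                "prompt": "\n".join(history),  ## naive memory: replay everything
                                "stream": False}).json()["response"]
    history.append(f"assistant: {reply}")
    return reply
```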

### basics

where libraries can help is _consistency_ and _repeatability_, which really enhance and help with composing things, such as code.

in the world of LLMs most of these HTTP APIs and their capabilities are very inconsistent, which is why libraries such as [litellm](https://github.com/BerriAI/litellm) and others help a lot.

towel also aims to provide consistency across models, so it is important to understand the basics: simple ways to engage LLMs without `@towel`s or plans.

### connect to LLM

"`thinker`" is the one with the power to connect to LLMs

```python
from towel import thinker

# would connect to Anthropic's Claude LLM
# it would expect you to have an anthropic key exported in .env:
# export ANTHROPIC_API_KEY=sk-an...

llm = thinker.Claude(model="claude-3-haiku-20240307")


# would connect to any local model hosted by Ollama
# it would expect you to have Ollama running at "http://localhost:11434"
# but a different url can be passed in as well

llm = thinker.Ollama(model="llama3:latest")
```

### ask LLM a question

we'll take examples from [docs/examples/thinking.py](docs/examples/thinking.py)

```python
thoughts = llm.think(prompt="what is the meaning of life? think step by step")
```

thinker would return a DeepThought:

```python
>>> type(thoughts)
<class 'towel.brain.base.DeepThought'>
```

which would look like this:

```python
DeepThought(id='9a7a0f12-ffaa-96bd-aa5e-51362dd2c06e',
            content=[TextThought(text='What a profound and complex question! The meaning of life is ...and wondrous journey called existence.',
                                 type='text')],
            tokens_used=820,
            model='llama3:latest',
            stop_reason='stop')
```

so if you are just after the text it can be extracted as:

```python
>>> thoughts.content[0].text
'What a profound and complex questi... called existence.'
```
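
if a response ever contains more than one text thought, a tiny helper can stitch them together (a sketch based on the `DeepThought` shape above, not a towel built-in):

```python
def full_text(thought) -> str:
    """join all text pieces of a DeepThought into a single string"""
    return ''.join(t.text for t in thought.content
                   if getattr(t, 'type', None) == 'text')
```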

### stream responses from LLM

two choices:

"manually":
```python
for chunk in llm.think(prompt="what is the meaning of life? think step by step",
                       stream=True):

    print(chunk, end='', flush=True)
```

or with "thinker"'s help:
```python
from towel.tools import stream

stream(llm.think(prompt="what is the meaning of life? think step by step",
                 stream=True))
```

### use the built in instructor

LLMs are not very good at being.. consistent. this is great for creative writing, but not that great for relying on responses to be formatted in a particular way: schema, type, shape, etc.

this is where [instructor](https://github.com/jxnl/instructor) comes in, helping to ensure LLM responses are strongly typed

towel has a built-in instructor that can be engaged by passing a "`response_model`" argument with the desired typed output

for example:

```python
from pydantic import BaseModel

class MeaningOfLife(BaseModel):
    meaning: str
    confidence_level: float

thoughts = llm.think(prompt="what is the meaning of life? think step by step",
                     response_model=MeaningOfLife)
```

now thinker will rely on instructor to return the response as the MeaningOfLife type:

```python
>>> thoughts
MeaningOfLife(meaning="I'll do my best to help you explore this existential question!...",
              confidence_level=7.5)
```
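
since the response is now a typed pydantic object, downstream code can branch on its fields directly (a trivial usage sketch):

```python
if thoughts.confidence_level > 5.0:
    print(f"meaning found: {thoughts.meaning}")
else:
    print("keep looking..")
```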

### using tools (a.k.a. function calling)

quite a popular topic in LLM circles.

this capability is about asking an LLM a question while also providing it a list of well-defined tools (functions) the LLM can _decide_ to call instead of answering the question based on its own knowledge.

one important aspect to understand is: an LLM does _**not**_ call functions or tools, it merely responds with a tool name (or several) and its arguments.

here is an example.

let's say we have "a tool" that checks the weather (the tool most frequently used as an example on this topic):

```python
import json

def check_current_weather(location, unit="fahrenheit"):
    """lookup the current weather in a given location"""
    if "tokyo" in location.lower():
        return json.dumps({"location": "Tokyo", "temperature": "10", "unit": unit})
    elif "new york" in location.lower():
        return json.dumps({"location": "San Francisco", "temperature": "72", "unit": unit})
    elif "paris" in location.lower():
        return json.dumps({"location": "Paris", "temperature": "22", "unit": unit})
    else:
        return json.dumps({"location": location, "temperature": "unknown"})
```

LLMs do not have up-to-date knowledge about the weather, hence if we ask an LLM "what's the weather like in Tokyo?", it would not know, and would usually respond with what the weather in Tokyo is like at different times of year. but..

this is where tools come in handy.

define a tool/function schema:

```python
tools = [
      {
        "name": "check_current_weather",
        "description": "checks the current weather in a given location",
        "input_schema": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "The city and state, e.g. New York, NY"
            },
            "unit": {
              "type": "string",
              "enum": ["celsius", "fahrenheit"],
              "description": "The unit of temperature to return, e.g. celsius or fahrenheit"
            }
          },
          "required": ["location"]
        }
      }]
```

and pass it to LLM:

```python
thoughts = llm.think(prompt="what's the weather like in Tokyo?",
                     tools=tools)
```

"`thinker`" would help a model to realize it does not "know" the answer to this question and would need to respond with the name of the tool to use and it arguments

and.. it does:

```python
>>> thoughts
DeepThought(id='9a9d2196-55b8-e252-8bfc-d9a82caaaf97',
            content=[TextThought(text='I can check the current weather in Tokyo for you!',
                                 type='text'),
                     ToolUseThought(id='9a9d2196-55b8-e252-8bfc-d9a82caaaf97',
                                    name='check_current_weather',
                                    input={'location': 'Tokyo, Japan', 'unit': 'celsius'},
                                    type='tool_use')],
            tokens_used=None,
            model='llama3:latest',
            stop_reason='tool_use')
```

one thing to pay attention to is the "`stop_reason`", which, in case a model decided to use a tool, would be "`tool_use`"

"`thinker`" has a helper "`call_tools`" function that can unwrap DeepThought and call tools:

```python
>>> thinker.call_tools(thoughts,
...                    {"check_current_weather": check_current_weather})

calling tool: check_current_weather
[{'tool_id': '9a9d2196-55b8-e252-8bfc-d9a82caaaf97',
  'tool_name': 'check_current_weather',
  'input': {'location': 'Tokyo, Japan',
            'unit': 'celsius'},
  'result': '{"location": "Tokyo",
              "temperature": "10",
              "unit": "celsius"}'}]
```
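
putting the two together, one way (a sketch) to handle either outcome, a direct answer or a tool call:

```python
if thoughts.stop_reason == 'tool_use':
    ## the model asked for a tool: unwrap, call, collect results
    outcomes = thinker.call_tools(thoughts,
                                  {"check_current_weather": check_current_weather})
    for outcome in outcomes:
        print(outcome['result'])
else:
    ## the model answered directly
    print(thoughts.content[0].text)
```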
----
the utility of "`thinker`" in all the cases above is **one single API** that works for local models as well as non-local models such as Claude, etc.

## towels

while [this](#llm-libraries-and-frameworks-are-unnecessary) is still very true, the ability to express LLM communication as plain functions, rather than "raw prompt + HTTP call"s, allows for breaking complex problems into smaller pieces and converting what could otherwise be an inconsistent, repetitive sequence of commands into beautiful function compositions.

let's work step by step to take a single Python function and "LLM enable" it, giving it some warmth by wrapping it in a @towel.

this function expects a JSON-formatted article that it will then convert to markdown: i.e. a normal, everyday programming task:
> _a full example lives in [docs/examples/wrap_it.py](docs/examples/wrap_it.py)._

```python
import json

def convert_json_to_markdown(article: str) -> str:
    parsed = json.loads(article)
    md = []
    md.append(f"# {parsed.get('title', 'Untitled')}")
    ## ...

    return '\n'.join(md)
```

the problem is, of course, in corner cases: malformed JSON, adding / removing features, changing spelling, format, etc., as this function does not really generalize to inputs it was not written to handle.

let's add some warmth to it: wrap it in a @towel:

```python
from towel import thinker, towel, tow, intel

@towel(prompts={"to_markdown": "convert this JSON {article} to markdown"})
def convert_json_to_markdown(article: str) -> str:

  llm, prompts, *_ = tow()
  thought = llm.think(prompts['to_markdown'].format(article=article))

  return thought.content[0].text
```

now that this function is warm (wrapped in a @towel), let's give it a go:

```python
llm = thinker.Ollama(model="llama3:latest")
# or
# llm = thinker.Claude(model="claude-3-haiku-20240307")

with intel(llm=llm):
    markdown = convert_json_to_markdown(json_article)

print(markdown)
```

and we see the exact same markdown that was produced by the first, non-LLM, "cold" Python function.<br/>
an interesting aspect about this @towel function is that it _generalizes_: it can convert a lot more JSON formats, and handle a lot more corner cases.

you can check out and run [docs/examples/wrap_it.py](docs/examples/wrap_it.py) to experiment with both.

### @towel, thinker, intel, tow

eeny, meeny, miny, moe..

looking at the example above it might not be obvious what "`intel`" and "`tow`" are doing.

"`intel`" is a function that sets up a thread local context for this "convert_json_to_markdown" function run</br>
and, in this case, it sets it up with an extra variable: "`llm`"

```python
with intel(llm=llm):
    markdown = convert_json_to_markdown(json_article)
```

which is later available inside this function via "`tow()`":
```python
@towel(prompts={"to_markdown": "convert this JSON {article} to markdown"})
def convert_json_to_markdown(article: str) -> str:

  llm, prompts, *_ = tow()
  ## ...
```

you may notice that "`tow()`" also makes "`prompts`" accessible.

"`prompts`" are, of course, optional and can be created inside the function, passed in, etc.</br>
at the end this is just a function, so anything Python goes.
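
for example, the same function with its prompt created inline (a sketch using only the `@towel` and `tow()` pieces already shown):

```python
@towel
def convert_json_to_markdown(article: str) -> str:
    llm, *_ = tow()
    thought = llm.think(f"convert this JSON {article} to markdown")
    return thought.content[0].text
```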

### a more practical example

you can see a simple, but much more interesting example in [docs/examples/paper_summarizer.py](docs/examples/paper_summarizer.py)<br/>
where a single @towel takes a link to a white paper, pulls it down from the web and does these 3 things:

```python
@towel(prompts={'main points': 'summarize the main points in this paper',
                'eli5':        'explain this paper like I\'m 5 years old',
                'issues':      'summarize issues that you can identify with ideas in this paper'})
def summarize_paper(url):
  ## ....
```
> [!NOTE]
> _more examples in [docs/examples](docs/examples)_

## plans

* LLM communication with "`thinker`", and
* "`@towel`" function composition

allow towel to empower an LLM, or a collection of LLMs, to _**plan**_ their activities given one or more problems to solve.

### vocabulary

a "plan" is a sequence of steps, routes and pins:

#### step

a step is a single executable unit of a plan. it sounds like more than it really is, since it is just an arbitrary function, in most cases a @towel function.

example:

```python
from towel import step

def find_meaning_of_life():
  return {"meaning_of_life": 42}

>>> step(find_meaning_of_life)
<towel.guide.Step object at 0x1083e93d0>
```

by itself "`step`" is not very useful, but as a part of a "plan" it is essential.

#### pin

a pin is a marker, or a checkpoint in a plan. it does not do anything besides having an addressable _name_:

```python
from towel import pin

>>> pin("rock and roll")
<towel.guide.Pin object at 0x1083cc7d0>
```

it is heavily used by "route" later on, and it is also really useful for debugging a plan's flow.

#### route

a route is a conditional unit of a plan. whenever the plan flow gets to a route, it runs a condition (i.e. checks things) and, depending on that check, the flow can be routed to any "pin".

in order to create a route, it needs to be given a function or lambda:

```python
from towel import route

>>> route(lambda result: 'conclude' if result['find_meaning_of_life']['confidence'] > 0.8 else 'test meaning')
<towel.guide.Route object at 0x108696e90>
```

which means:
* go to a "conclude" pin (a pin with name "conclude") iff a result from the "find_meaning_of_life" has a "confidence" key with a value greater than "0.8"
* otherwise go to "test meaning" (a pin with name "test meaning")
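
and since a route accepts any callable, the same condition can live in a named function, which reads better as conditions grow:

```python
def where_to(result):
    if result['find_meaning_of_life']['confidence'] > 0.8:
        return 'conclude'
    return 'test meaning'

route(where_to)
```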

### flow and data

let's look at a real plan that is a sequence of steps, pins and routes:

> _a full example lives in [docs/examples/space_trip.py](docs/examples/space_trip.py)_

```python
from towel import step, route, pin, plan

space_trip = plan([

    step(pick_planet),

    pin('are_you_ready'),
    step(how_ready_are_you),
    route(lambda result: 'book' if result['how_ready_are_you']['score'] > 95 else 'train'),

    pin('train'),
    step(space_bootcamp),
    route(lambda x: 'are_you_ready'),

    pin('book'),
    step(reserve_spaceship)
])
```

it starts out with "`step(pick_planet)`" which would call a function `pick_planet`:

```python
@towel
def pick_planet():
    ## ...
    return {'destination': planets[choice].name}
```

notice that this function returns "`destination`". internally towel would hold on to the result from this function, and would make it available for all other functions via _arguments_.

then the flow reaches "`pin('are_you_ready')`". it does nothing, as pins _do_ nothing.

it then moves on to the "`step(how_ready_are_you)`" which calls a function `how_ready_are_you`:

```python
@towel
def how_ready_are_you(destination):
    ## ...
    return {'score': readiness.score}

```

notice that nothing inside the plan definition is passing any arguments into `how_ready_are_you`, but it does take a "`destination`" argument.<br/>
this destination argument will be passed (by name) from the internal plan context that _remembers all the return values from all the steps_ and makes them available as function arguments.
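
conceptually, the binding works something like this (an illustrative sketch of the idea, _not_ towel's actual internals):

```python
import inspect

def call_step(fn, context: dict):
    ## pass only the context keys the function actually asks for, by name
    params = inspect.signature(fn).parameters
    kwargs = {name: context[name] for name in params if name in context}
    result = fn(**kwargs)            ## e.g. how_ready_are_you(destination=...)
    context.update(result or {})     ## returned keys become available to later steps
    return result
```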

the flow then looks at the route:

```python
route(lambda result: 'book' if result['how_ready_are_you']['score'] > 95 else 'train')
```

which would:
* route the flow to the "`pin('book')`" iff the "`score`" value of the `how_ready_are_you` step is larger than `95`
* otherwise it would route to the "`pin('train')`"

the rest of the flow uses the exact same concepts.

> [!TIP]
> remember to return dictionaries from `@towel` functions that are part of the plan<br/>
> as the **_keys_** from those dictionaries are then matched to other step function **_arguments_**<br/>
> and when there is a match, values are bound / arguments are _passed_ by the flow

### executing a plan

as you can see in the example ([docs/examples/space_trip.py](docs/examples/space_trip.py)), a plan is executed by the "`thinker.plan()`" function:

```python
llm = thinker.Ollama(model="llama3:latest")
# llm = thinker.Claude(model="claude-3-haiku-20240307")

trip = thinker.plan(space_trip,
                    llm=llm)

say("trip is booked:", f"{json.dumps(trip['reserve_spaceship'], indent=2)}")
```

### mind maps

since plans have many steps, some steps might need to be performed by LLMs that are better suited for them.

by default a plan would execute all the steps with the LLM that was provided to it:

```python
blueprint = make_plan()

thinker.plan(blueprint,
             llm=llama)
```

in case some steps need to be done by different LLMs, a plan takes a "`mind_map`" argument:

```python
thinker.plan(blueprint,
             llm=llama,
             mind_map={"review_stories": claude},
             start_with={"requirements": requirements})
```

all the steps in this plan are going to be performed by the "llama" model, but the "review_stories" step will be done by "claude"

you can look at the full example in [docs/examples/execute_da_plan.py](docs/examples/execute_da_plan.py)<br/>
where a smaller "`llama3 8B`" takes requirements and creates user stories, but "`claude`" is the one who reviews these stories and provides feedback:

```python
    return plan([

        step(create_stories),

        pin('review'),
        step(review_stories),   ## <<< this step will be done by Claude
        route(lambda result: 'revise' if result['review_stories']['quality_score'] < 0.8 else 'implement'),

        pin('revise'),
        step(revise_stories),
        route(lambda x: 'review'),

        pin('implement'),
        step(implement_code)
    ])
```

### kick off intel

a plan is usually kicked off with initial data: a problem definition or a question

this is done via a "`start_with`" plan argument:

```python
thinker.plan(blueprint,
             llm=llama,
             start_with={"requirements": requirements})
```

and "`requirements`" would most likely be a function argument name in the first step in this plan.

## making plans

the plan's clear [vocabulary](#vocabulary) and the fact that a plan itself is a data structure enable LLMs to take in a problem<br/>
... and _create a plan_ to solve this problem:

```python
from towel import towel, tow
from towel.type import Plan
from towel.prompt import make_plan

@towel(prompts={'plan': """given this problem: {problem} {make_plan}"""})
def make_da_plan(problem: str):
    llm, prompts, *_ = tow()
    plan = llm.think(prompts['plan'].format(problem=problem,
                                            make_plan=make_plan),
                     response_model=Plan)
    return plan
```

this function takes a problem and creates a plan
> full example is in [docs/examples/make_da_plan.py](docs/examples/make_da_plan.py)

for example, here is a plan Claude created to..

```python
llm = Claude(model="claude-3-haiku-20240307")

with intel(llm=llm):
    plan = make_da_plan("make sure there are no wars")
```
```python
[
  step(analyze_current_global_conflicts),
  step(identify_root_causes),
  step(assess_diplomatic_relations),
  route(lambda result: 'improve_diplomacy' if result['assess_diplomatic_relations']['status'] == 'poor' else 'address_economic_factors'),

  pin('improve_diplomacy'),
  step(organize_peace_talks),
  step(implement_conflict_resolution_strategies),
  route(lambda result: 'address_economic_factors' if result['implement_conflict_resolution_strategies']['success'] else 'reassess_diplomatic_approach'),

  pin('reassess_diplomatic_approach'),

  ## ... more steps

  pin('promote_sustainable_development'),
  step(implement_environmental_protection_measures),
  step(develop_renewable_energy_sources),

  pin('monitor_and_evaluate'),
  step(establish_global_peace_index),
  step(conduct_regular_peace_assessments),
  route(lambda result: 'analyze_current_global_conflicts' if result['conduct_regular_peace_assessments']['global_peace_score'] < 0.9 else 'maintain_peace'),

  pin('maintain_peace'),
  step(continue_peace_initiatives)
]
```

## plans that make plans

this example [docs/examples/system_two/planer.py](docs/examples/system_two/planer.py) takes it one step further and:

* given a problem
* it follows a plan
* to create a plan
* to solve the problem

this is an interesting area to improve on:
* create functions of runtime created plans
* _safely_ execute them
* create more plans
* and keep researching

but even in its current state it is capable of creating and refining (with a stronger model) plans that provide solid approaches to solving complex problems

the gist is:

```python
def plan_maker(problem: str):
    blueprint = plan([
        step(research_problem),   ## as part of research, goes online to supplement LLM knowledge with fresh results
        step(restate_problem),
        step(divide_problem),
        step(create_plan),

        pin('review'),
        step(review_plan),
        route(lambda result: 'refine' if result['review_plan']['needs_refinement'] else 'end'),

        pin('refine'),
        step(refine_plan),
        route(lambda _: 'review'),

        ## create functions
        ## execute plan
        ## validate go back

        pin('end')
    ])

    default_model = thinker.Ollama(model="llama3:70b")
    stronger_model = thinker.Claude(model="claude-3-5-sonnet-20240620")

    mind_map = {
        "review_plan": stronger_model
    }

    result = thinker.plan(blueprint,
                          llm=default_model,
                          mind_map=mind_map,
                          start_with={"problem": problem})

    return result
```
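
kicking it off is then just a function call (the problem text is only an example):

```python
solution = plan_maker("make sure there are no wars")
```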

# license

Copyright © 2024 tolitius

Distributed under the Eclipse Public License either version 1.0 or (at
your option) any later version.

            
