- Name: lisette
- Version: 0.0.12
- Home page: https://github.com/AnswerDotAI/lisette
- Summary: litellm helper
- Upload time: 2025-10-26 22:35:39
- Author: AnswerDotAI
- Requires Python: >=3.9
- License: Apache Software License 2.0
- Keywords: nbdev, jupyter, notebook, python

# Lisette


<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->

> **NB**: If you are reading this in GitHub’s readme, we recommend you
> instead read the much more nicely formatted [documentation
> version](https://lisette.answer.ai/) of this tutorial.

*Lisette* is a wrapper for the [LiteLLM Python
SDK](https://docs.litellm.ai/), which provides unified access to 100+
LLM providers using the OpenAI API format.

LiteLLM provides a unified interface to access multiple LLMs, but it’s
quite low level: it leaves the developer to do a lot of stuff manually.
Lisette automates pretty much everything that can be automated, whilst
providing full control. Amongst the features provided:

- A [`Chat`](https://lisette.answer.ai/core.html#chat) class that
  creates stateful dialogs across any LiteLLM-supported model
- Convenient message creation utilities for text, images, and mixed
  content
- Simple and convenient support for tool calling with automatic
  execution
- Built-in support for web search capabilities (including citations for
  supporting models)
- Streaming responses with formatting
- Full async support with
  [`AsyncChat`](https://lisette.answer.ai/core.html#asyncchat)
- Prompt caching (for supporting models)

To use Lisette, you’ll need to set the appropriate API keys as
environment variables for whichever LLM providers you want to use.
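
For example, you might set them in your shell or from Python before
creating a chat. A minimal sketch (the placeholder values are yours to
fill in; the variable names follow LiteLLM’s provider conventions):

``` python
import os

# Set whichever keys you need for the providers you plan to use
os.environ['ANTHROPIC_API_KEY'] = 'sk-ant-...'  # for claude-* models
os.environ['GEMINI_API_KEY'] = '...'            # for gemini/* models
os.environ['OPENAI_API_KEY'] = 'sk-...'         # for openai/* models
```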

## Get started

LiteLLM will automatically be installed with Lisette if you don’t
already have it.

``` python
!pip install lisette -qq
```

Lisette only exports the symbols needed to use the library, so you can
safely use `import *` to bring them in. Here’s a quick example showing
how easy it is to switch between different LLM providers:

``` python
from lisette import *
```

## Chat

``` python
models = ['claude-sonnet-4-20250514', 'gemini/gemini-2.5-flash', 'openai/gpt-4o']

for model in models:
    chat = Chat(model)
    res = chat("Please tell me about yourself in one brief sentence.")
    display(res)
```

I’m Claude, an AI assistant created by Anthropic to be helpful,
harmless, and honest in conversations and tasks.

<details>

- id: `chatcmpl-xxx`
- model: `claude-sonnet-4-20250514`
- finish_reason: `stop`
- usage:
  `Usage(completion_tokens=29, prompt_tokens=17, total_tokens=46, completion_tokens_details=None, prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None), cache_creation_input_tokens=0, cache_read_input_tokens=0)`

</details>

I am a large language model, trained by Google, designed to assist with
information and generate text.

<details>

- id: `chatcmpl-xxx`
- model: `gemini-2.5-flash`
- finish_reason: `stop`
- usage:
  `Usage(completion_tokens=603, prompt_tokens=11, total_tokens=614, completion_tokens_details=CompletionTokensDetailsWrapper(accepted_prediction_tokens=None, audio_tokens=None, reasoning_tokens=583, rejected_prediction_tokens=None, text_tokens=20), prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=None, text_tokens=11, image_tokens=None))`

</details>

I’m an AI language model created by OpenAI, designed to assist with a
wide range of questions and tasks by providing information and
generating text-based responses.

<details>

- id: `chatcmpl-xxx`
- model: `gpt-4o-2024-08-06`
- finish_reason: `stop`
- usage:
  `Usage(completion_tokens=30, prompt_tokens=17, total_tokens=47, completion_tokens_details=CompletionTokensDetailsWrapper(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0, text_tokens=None), prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=0, cached_tokens=0, text_tokens=None, image_tokens=None))`

</details>

That’s it! Lisette handles all the provider-specific details
automatically. Each model will respond in its own style, but the
interface remains the same.
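
The result is a standard LiteLLM `ModelResponse` in OpenAI format, so
if you need the raw reply text rather than the notebook rendering, you
can pull it out directly (a minimal sketch):

``` python
chat = Chat(models[0])
res = chat("Say hi in three words.")
print(res.choices[0].message.content)  # the plain text of the reply
```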

## Message formatting

### Multiple messages

Lisette accepts multiple messages in one go:

``` python
chat = Chat(models[0])
res = chat(['Hi! My favorite drink is coffee.', 'Hello!', "What's my favorite drink?"])
display(res)
```

Hello! Based on what you just told me, your favorite drink is coffee! ☕

<details>

- id: `chatcmpl-xxx`
- model: `claude-sonnet-4-20250514`
- finish_reason: `stop`
- usage:
  `Usage(completion_tokens=22, prompt_tokens=23, total_tokens=45, completion_tokens_details=None, prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None), cache_creation_input_tokens=0, cache_read_input_tokens=0)`

</details>

If you have a pre-existing message history, you can also pass it when
you create the [`Chat`](https://lisette.answer.ai/core.html#chat)
object:

``` python
chat = Chat(models[0], hist=['Hi! My favorite drink is coffee.', 'Hello!'])
res = chat("What's my favorite drink?")
display(res)
```

Your favorite drink is coffee! You just mentioned that in your previous
message.

<details>

- id: `chatcmpl-xxx`
- model: `claude-sonnet-4-20250514`
- finish_reason: `stop`
- usage:
  `Usage(completion_tokens=18, prompt_tokens=30, total_tokens=48, completion_tokens_details=None, prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None), cache_creation_input_tokens=0, cache_read_input_tokens=0)`

</details>

### Images

Lisette also makes it easy to include images in your prompts:

``` python
from pathlib import Path
from IPython.display import Image
```

``` python
fn = Path('samples/puppy.jpg')
img = fn.read_bytes()
Image(img)
```

![](index_files/figure-commonmark/cell-8-output-1.jpeg)

All you have to do is read it in as bytes:

``` python
img[:20]
```

    b'\xff\xd8\xff\xe0\x00\x10JFIF\x00\x01\x01\x00\x00\x01\x00\x01\x00\x00'

And you can pass it directly in a
[`Chat`](https://lisette.answer.ai/core.html#chat) call:

``` python
chat = Chat(models[0])
chat([img, "What's in this image? Be brief."])
```

A cute puppy with brown and white fur lying on grass next to purple
flowers.

<details>

- id: `chatcmpl-xxx`
- model: `claude-sonnet-4-20250514`
- finish_reason: `stop`
- usage:
  `Usage(completion_tokens=20, prompt_tokens=108, total_tokens=128, completion_tokens_details=None, prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None), cache_creation_input_tokens=0, cache_read_input_tokens=0)`

</details>
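
Because a message is just a list of parts, you can mix several images
with text in one call. A sketch following the same pattern (the second
image path here is hypothetical):

``` python
img2 = Path('samples/kitten.jpg').read_bytes()  # hypothetical second image
chat = Chat(models[0])
chat([img, img2, "What do these two images have in common? Be brief."])
```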

### Prefill

Some providers (e.g. Anthropic) support `prefill`, allowing you to
specify how the assistant’s response should begin:

``` python
chat = Chat(models[0])
chat("Concisely, what's the meaning of life?", prefill="According to Douglas Adams,")
```

According to Douglas Adams,it’s 42.

More seriously, there’s no universal answer. Common perspectives
include:

- Creating meaning through relationships, growth, and contribution
- Fulfilling a divine purpose or spiritual calling
- Maximizing well-being and minimizing suffering
- Leaving a positive legacy
- Simply experiencing and appreciating existence itself

The meaning might be something you create rather than discover.

<details>

- id: `chatcmpl-xxx`
- model: `claude-sonnet-4-20250514`
- finish_reason: `stop`
- usage:
  `Usage(completion_tokens=84, prompt_tokens=24, total_tokens=108, completion_tokens_details=None, prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None), cache_creation_input_tokens=0, cache_read_input_tokens=0)`

</details>

## Tools

Lisette makes it easy to give LLMs access to Python functions. Just
define a function with type hints and a docstring:

``` python
def add_numbers(
    a: int,  # First number to add
    b: int   # Second number to add  
) -> int:
    "Add two numbers together"
    return a + b
```

Now pass the function to
[`Chat`](https://lisette.answer.ai/core.html#chat) and the model can use
it automatically:

``` python
chat = Chat(models[0], tools=[add_numbers])
res = chat("What's 47 + 23? Use the tool.")
res
```

The result of 47 + 23 is 70.

<details>

- id: `chatcmpl-xxx`
- model: `claude-sonnet-4-20250514`
- finish_reason: `stop`
- usage:
  `Usage(completion_tokens=17, prompt_tokens=533, total_tokens=550, completion_tokens_details=None, prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None), cache_creation_input_tokens=0, cache_read_input_tokens=0)`

</details>

If you want to see all intermediate messages and outputs, you can pass
`return_all=True`:

``` python
chat = Chat(models[0], tools=[add_numbers])
res = chat("What's 47 + 23 + 59? Use the tool.",max_steps=3,return_all=True)
display(*res)
```

I’ll help you calculate 47 + 23 + 59 using the add_numbers tool. Since
the tool can only add two numbers at a time, I’ll need to do this in two
steps.

🔧 add_numbers({"a": 47, "b": 23})

<details>

- id: `chatcmpl-xxx`
- model: `claude-sonnet-4-20250514`
- finish_reason: `tool_calls`
- usage:
  `Usage(completion_tokens=116, prompt_tokens=433, total_tokens=549, completion_tokens_details=None, prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None), cache_creation_input_tokens=0, cache_read_input_tokens=0)`

</details>

    {'tool_call_id': 'toolu_01F9oakoP8ANHkTMD1DyQDi7',
     'role': 'tool',
     'name': 'add_numbers',
     'content': '70'}

Now I’ll add the result (70) to the third number (59):

🔧 add_numbers({"a": 70, "b": 59})

<details>

- id: `chatcmpl-xxx`
- model: `claude-sonnet-4-20250514`
- finish_reason: `tool_calls`
- usage:
  `Usage(completion_tokens=87, prompt_tokens=562, total_tokens=649, completion_tokens_details=None, prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None), cache_creation_input_tokens=0, cache_read_input_tokens=0)`

</details>

    {'tool_call_id': 'toolu_01Cdf3FHJdbx64F8H8ooE1Db',
     'role': 'tool',
     'name': 'add_numbers',
     'content': '129'}

The answer is 129. So 47 + 23 + 59 = 129.

<details>

- id: `chatcmpl-xxx`
- model: `claude-sonnet-4-20250514`
- finish_reason: `stop`
- usage:
  `Usage(completion_tokens=25, prompt_tokens=662, total_tokens=687, completion_tokens_details=None, prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None), cache_creation_input_tokens=0, cache_read_input_tokens=0)`

</details>

It shows the intermediate tool calls and the tool results!
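
Since `return_all=True` hands back the full message list, you can also
post-process it yourself, e.g. to pull out just the tool results. A
sketch based on the dict structure shown above:

``` python
# res interleaves ModelResponse objects with tool-result dicts
tool_results = [m['content'] for m in res
                if isinstance(m, dict) and m.get('role') == 'tool']
print(tool_results)  # ['70', '129']
```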

## Web search

Some models support web search capabilities. Lisette makes this easy to
use:

``` python
chat = Chat(models[0], search='l')  # 'l'ow, 'm'edium, or 'h'igh search context
res = chat("Please tell me one fun fact about otters. Keep it brief")
res
```

Here’s a fun fact about otters: Sea otters allow themselves to get
entangled in kelp forests - this creates a tether so they don’t drift
away on sleep currents as they sleep. They essentially use kelp as a
natural anchor to stay in place while floating and resting on the
water’s surface!

<details>

- id: `chatcmpl-xxx`
- model: `claude-sonnet-4-20250514`
- finish_reason: `stop`
- usage:
  `Usage(completion_tokens=143, prompt_tokens=15626, total_tokens=15769, completion_tokens_details=None, prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None), server_tool_use=ServerToolUse(web_search_requests=1), cache_creation_input_tokens=0, cache_read_input_tokens=0)`

</details>

> [!TIP]
>
> Some providers (like Anthropic) provide citations for their search
> results.

``` python
res.choices[0].message.provider_specific_fields
```

    {'citations': [[{'type': 'web_search_result_location',
        'cited_text': 'Sea Otters allow themselves to get entangled in kelp forests this creates a tether so they don’t drift away on sleep currents as they sleep. ',
        'url': 'https://www.mygreenworld.org/blog/facts-about-otters',
        'title': 'Five Fast Facts about Otters — My Green World',
        'encrypted_index': 'EpABCioIBxgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDCMi/kxdYrQXVUX+ZxoMVvW3BHE29cyMhwAFIjBZEBw3PaH+XAslsXWMNucD7FqSwe5Fnnsfh2RzTX9x/q9XQ1Mm1Ke6JOreehNzVI0qFDkJYT4NCX8U4CjHHwoyLKtY66vhGAQ='}]],
     'thinking_blocks': None}
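
Given the nesting shown above, extracting the cited URLs is short work.
A minimal sketch tied to that structure:

``` python
cits = res.choices[0].message.provider_specific_fields['citations']
urls = {c['url'] for group in cits for c in group}  # flatten nested citation groups
print(urls)
```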

## Streaming

For real-time responses, use `stream=True` to get chunks as they’re
generated rather than waiting for the complete response:

``` python
chat = Chat(models[0])
res_gen = chat("Concisely, what are the top 10 biggest animals?", stream=True)
res_gen
```

    <generator object Chat._call>

``` python
from litellm import ModelResponse, ModelResponseStream
```

You can loop over the generator to get the partial responses:

``` python
for chunk in res_gen:
    if isinstance(chunk, ModelResponseStream): print(chunk.choices[0].delta.content, end='')
```

    Here are the top 10 biggest animals by size/weight:

    1. **Blue whale** - largest animal ever, up to 100 feet long
    2. **Fin whale** - second-largest whale, up to 85 feet
    3. **Bowhead whale** - up to 65 feet, very heavy build
    4. **Right whale** - up to 60 feet, extremely bulky
    5. **Sperm whale** - up to 67 feet, largest toothed whale
    6. **Gray whale** - up to 50 feet
    7. **Humpback whale** - up to 52 feet
    8. **African elephant** - largest land animal, up to 13 feet tall
    9. **Colossal squid** - up to 46 feet long (largest invertebrate)
    10. **Giraffe** - tallest animal, up to 18 feet tall

    *Note: Various whale species dominate due to the ocean's ability to support massive body sizes.*None

And the final chunk is the complete `ModelResponse` (the trailing
`None` printed above came from the last stream delta, whose content is
`None`):

``` python
chunk
```

Here are the top 10 biggest animals by size/weight:

1.  **Blue whale** - largest animal ever, up to 100 feet long
2.  **Fin whale** - second-largest whale, up to 85 feet
3.  **Bowhead whale** - up to 65 feet, very heavy build
4.  **Right whale** - up to 60 feet, extremely bulky
5.  **Sperm whale** - up to 67 feet, largest toothed whale
6.  **Gray whale** - up to 50 feet
7.  **Humpback whale** - up to 52 feet
8.  **African elephant** - largest land animal, up to 13 feet tall
9.  **Colossal squid** - up to 46 feet long (largest invertebrate)
10. **Giraffe** - tallest animal, up to 18 feet tall

*Note: Various whale species dominate due to the ocean’s ability to
support massive body sizes.*

<details>

- id: `chatcmpl-xxx`
- model: `claude-sonnet-4-20250514`
- finish_reason: `stop`
- usage:
  `Usage(completion_tokens=233, prompt_tokens=22, total_tokens=255, completion_tokens_details=CompletionTokensDetailsWrapper(accepted_prediction_tokens=None, audio_tokens=None, reasoning_tokens=0, rejected_prediction_tokens=None, text_tokens=None), prompt_tokens_details=None)`

</details>
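
A common pattern is to accumulate the streamed text while keeping the
final `ModelResponse` around for its usage stats. A sketch using the
same `isinstance` checks as above:

``` python
chat = Chat(models[0])
parts, final = [], None
for chunk in chat("Name three planets.", stream=True):
    if isinstance(chunk, ModelResponseStream):
        # skip empty deltas (e.g. the final stream chunk)
        if chunk.choices[0].delta.content: parts.append(chunk.choices[0].delta.content)
    elif isinstance(chunk, ModelResponse):
        final = chunk  # the complete response

print(''.join(parts))
print(final.usage)  # token accounting from the final response
```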

## Async

For web applications and concurrent operations, like in
[FastHTML](https://fastht.ml), we recommend using
[`AsyncChat`](https://lisette.answer.ai/core.html#asyncchat):

``` python
chat = AsyncChat(models[0])
await chat("Hi there")
```

Hello! How are you doing today? Is there anything I can help you with?

<details>

- id: `chatcmpl-xxx`
- model: `claude-sonnet-4-20250514`
- finish_reason: `stop`
- usage:
  `Usage(completion_tokens=20, prompt_tokens=9, total_tokens=29, completion_tokens_details=None, prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None), cache_creation_input_tokens=0, cache_read_input_tokens=0)`

</details>

To wrap up, we’ll show an example combining async, streaming, tool
calling, and search:

``` python
chat = AsyncChat(models[0], search='l', tools=[add_numbers])
res = await chat("""\
Search the web for the avg weight, in kgs, of male African and Asian elephants. Then add the two.
Keep your replies ultra concise! Dont search the web more than once please.
""", max_steps=4, stream=True)
await adisplay_stream(res)  # this is a convenience function to make async streaming look great in notebooks!
```

Based on the search results:

**Male African elephants**:
[\*](https://www.africa-safaris.com/How-Much-Does-An-Elephant-Weigh "How Much Does An Elephant Weigh")
[\*](https://www.quora.com/What-is-the-average-weight-of-an-adult-African-elephant-in-pounds-and-tons "What is the average weight of an adult African elephant in pounds and tons? - Quora")
Average weight is 5,000 kg (11,000 pounds)

**Male Asian elephants**:
[\*](https://www.ifaw.org/international/journal/difference-african-asian-elephants "African Elephants vs. Asian Elephants | IFAW")
[\*](https://www.ifaw.org/international/journal/difference-african-asian-elephants "African Elephants vs. Asian Elephants | IFAW")
Average weight is 3,600 kg (7,900 pounds)
<details class="tool-usage-details">

`add_numbers({"a": 5000, "b": 3600})` - `8600`

</details>

**Total**: 8,600 kg
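
Outside a notebook, you can consume the async stream manually instead
of using `adisplay_stream`. A sketch assuming the awaited result is an
async generator yielding the same chunk types as the sync case:

``` python
import asyncio

async def main():
    chat = AsyncChat(models[0])
    res = await chat("Name three oceans, briefly.", stream=True)
    async for chunk in res:
        if isinstance(chunk, ModelResponseStream) and chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end='')

asyncio.run(main())
```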

## Next steps

Ready to dive deeper?

- Check out the rest of the
  [documentation](https://lisette.answer.ai/core.html).
- Visit the [GitHub repository](https://github.com/answerdotai/lisette)
  to contribute or report issues.
- Join our [Discord community](https://discord.gg/y7cDEX7r)!

            
