# cosette

- Version: 0.2.3
- Summary: A helper for using the OpenAI API
- Home page: https://github.com/AnswerDotAI/cosette
- Author: Jeremy Howard
- Requires Python: >=3.9
- License: Apache Software License 2.0
- Keywords: nbdev, jupyter, notebook, python
- Uploaded: 2025-08-09 23:20:21


<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->

## Install

``` sh
pip install cosette
```

## Getting started

OpenAI’s Python SDK will automatically be installed with Cosette, if you
don’t already have it.

``` python
from cosette import *
```

Cosette only exports the symbols that are needed to use the library, so
you can use `import *` to import them. Alternatively, just use:

``` python
import cosette
```

…and then add the prefix `cosette.` to any usages of the module.
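
For example, using the
[`Chat`](https://AnswerDotAI.github.io/cosette/core.html#chat) class
introduced below:

``` python
import cosette

# Prefixed usage; equivalent to `Chat(...)` after `from cosette import *`
chat = cosette.Chat('gpt-4.1')
```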

Cosette provides `models`, which is a list of models currently available
from the SDK.

``` python
' '.join(models)
```

    'o1-preview o1-mini gpt-4o gpt-4o-mini gpt-4-turbo gpt-4 gpt-4-32k gpt-3.5-turbo gpt-3.5-turbo-instruct o1 o3-mini chatgpt-4o-latest o1-pro o3 o4-mini gpt-4.1 gpt-4.1-mini gpt-4.1-nano'
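
Since `models` is a plain list of model-name strings, you can check for
a particular model before using it:

``` python
'gpt-4.1' in models
```

    True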

For these examples, we’ll use GPT-4.1.

``` python
model = 'gpt-4.1'
```

## Chat

The main interface to Cosette is the
[`Chat`](https://AnswerDotAI.github.io/cosette/core.html#chat) class,
which provides a stateful interface to the models:

``` python
chat = Chat(model, sp="""You are a helpful and concise assistant.""")
chat("I'm Jeremy")
```

Hi Jeremy! How can I help you today?

<details>

- id: chatcmpl-BjwyifaV82goo6WYIeEORBGDMLCSA
- choices: \[Choice(finish_reason=‘stop’, index=0, logprobs=None,
  message=ChatCompletionMessage(content=‘Hi Jeremy! How can I help you
  today?’, refusal=None, role=‘assistant’, annotations=\[\], audio=None,
  function_call=None, tool_calls=None))\]
- created: 1750291172
- model: gpt-4.1-2025-04-14
- object: chat.completion
- service_tier: default
- system_fingerprint: fp_51e1070cf2
- usage: CompletionUsage(completion_tokens=10, prompt_tokens=21,
  total_tokens=31,
  completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0,
  audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0),
  prompt_tokens_details=PromptTokensDetails(audio_tokens=0,
  cached_tokens=0))

</details>

``` python
r = chat("What's my name?")
r
```

Your name is Jeremy. How can I assist you, Jeremy?

<details>

- id: chatcmpl-BjwyjN4t2wKzVWBRVWhD6buZF8y07
- choices: \[Choice(finish_reason=‘stop’, index=0, logprobs=None,
  message=ChatCompletionMessage(content=‘Your name is Jeremy. How can I
  assist you, Jeremy?’, refusal=None, role=‘assistant’,
  annotations=\[\], audio=None, function_call=None, tool_calls=None))\]
- created: 1750291173
- model: gpt-4.1-2025-04-14
- object: chat.completion
- service_tier: default
- system_fingerprint: fp_b3f1157249
- usage: CompletionUsage(completion_tokens=13, prompt_tokens=43,
  total_tokens=56,
  completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0,
  audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0),
  prompt_tokens_details=PromptTokensDetails(audio_tokens=0,
  cached_tokens=0))

</details>

As you see above, displaying the results of a call in a notebook shows
just the message contents, with the other details hidden behind a
collapsible section. Alternatively you can `print` the details:

``` python
print(r)
```

    ChatCompletion(id='chatcmpl-BjwyjN4t2wKzVWBRVWhD6buZF8y07', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Your name is Jeremy. How can I assist you, Jeremy?', refusal=None, role='assistant', annotations=[], audio=None, function_call=None, tool_calls=None))], created=1750291173, model='gpt-4.1-2025-04-14', object='chat.completion', service_tier='default', system_fingerprint='fp_b3f1157249', usage=In: 43; Out: 13; Total: 56)

You can use `stream=True` to stream the results as soon as they arrive
(although you will only see the gradual generation if you execute the
notebook yourself, of course!)

``` python
for o in chat("What's your name?", stream=True): print(o, end='')
```

    I’m an AI assistant created by OpenAI, and you can just call me Assistant! If you’d like to give me a nickname, feel free—what would you like to call me?
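
Since the stream yields plain text chunks (as the `print` loop above
shows), you can also collect a streamed response into a single string:

``` python
# Join the streamed chunks into one string (minimal sketch)
res = ''.join(chat("Summarise our conversation so far", stream=True))
```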

## Model Capabilities

Different OpenAI models have different capabilities. Some models, such
as o1-mini, do not support streaming, system prompts, or setting the
temperature. You can query these capabilities using these functions:

``` python
# o1 supports streaming and system prompts, but not setting the temperature
can_stream('o1'), can_set_system_prompt('o1'), can_set_temperature('o1')
```

    (True, True, False)

``` python
# gpt-4o supports all three
can_stream('gpt-4o'), can_set_system_prompt('gpt-4o'), can_set_temperature('gpt-4o')
```

    (True, True, True)
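
These functions make it easy to write code that works across models.
Here's a minimal sketch (the `ask` helper is ours, not part of Cosette)
that streams only when the model supports it:

``` python
def ask(model, prompt):
    "Print a response from `model`, streaming only when supported."
    chat = Chat(model)
    if can_stream(model):
        for o in chat(prompt, stream=True): print(o, end='')
    else: print(contents(chat(prompt)))
```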

## Tool use

[Tool use](https://platform.openai.com/docs/guides/function-calling)
lets the model use external tools.

We use [docments](https://fastcore.fast.ai/docments.html) to make
defining Python functions as ergonomic as possible. Each parameter (and
the return value) should have a type, and a docments comment with the
description of what it is. As an example we’ll write a simple function
that adds numbers together, and will tell us when it’s being called:

``` python
def sums(
    a:int,  # First thing to sum
    b:int=1 # Second thing to sum
) -> int: # The sum of the inputs
    "Adds a + b."
    print(f"Finding the sum of {a} and {b}")
    return a + b
```
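
Behind the scenes, those comments are parsed into structured metadata,
which Cosette uses to build the tool schema sent to the API. You can
peek at what gets extracted by calling fastcore's `docments` directly
(not needed for normal use):

``` python
from fastcore.docments import docments

# Maps each parameter (and the return value) to its comment, roughly:
# {'a': 'First thing to sum', 'b': 'Second thing to sum',
#  'return': 'The sum of the inputs'}
docments(sums)
```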

Sometimes the model will say something like “according to the `sums`
tool the answer is” – generally we’d rather it just tell the user the
answer, so we can use a system prompt to help with this:

``` python
sp = "Never mention what tools you use."
```

We’ll get the model to add up some long numbers:

``` python
a,b = 604542,6458932
pr = f"What is {a}+{b}?"
pr
```

    'What is 604542+6458932?'

To use tools, pass a list of them to
[`Chat`](https://AnswerDotAI.github.io/cosette/core.html#chat):

``` python
chat = Chat(model, sp=sp, tools=[sums])
```

Now when we call that with our prompt, the model doesn’t return the
answer, but instead returns a `tool_calls` request, which means we have
to call the named tool with the provided parameters:

``` python
r = chat(pr)
r
```

    Finding the sum of 604542 and 6458932

- id: chatcmpl-Bjwyvg3bSWW0pKTxdwKCfhZKnwpho
- choices: \[Choice(finish_reason=‘tool_calls’, index=0, logprobs=None,
  message=ChatCompletionMessage(content=None, refusal=None,
  role=‘assistant’, annotations=\[\], audio=None, function_call=None,
  tool_calls=\[ChatCompletionMessageToolCall(id=‘call_cry44pvhtr0KDszQFufZjyGN’,
  function=Function(arguments=‘{“a”:604542,“b”:6458932}’, name=‘sums’),
  type=‘function’)\]))\]
- created: 1750291185
- model: gpt-4.1-2025-04-14
- object: chat.completion
- service_tier: default
- system_fingerprint: fp_51e1070cf2
- usage: CompletionUsage(completion_tokens=21, prompt_tokens=86,
  total_tokens=107,
  completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0,
  audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0),
  prompt_tokens_details=PromptTokensDetails(audio_tokens=0,
  cached_tokens=0))
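
Here’s the manual version of the step Cosette automates, based on the
response structure shown above:

``` python
import json

tc = r.choices[0].message.tool_calls[0]    # the requested tool call
args = json.loads(tc.function.arguments)   # {'a': 604542, 'b': 6458932}
sums(**args)                               # 7063474
```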

Cosette handles all that for us – we just have to pass along the
message, and it all happens automatically:

``` python
chat()
```

604,542 + 6,458,932 equals 7,063,474.

<details>

- id: chatcmpl-Bjx0vtvAnE4W7z0dupqPfqnJngBCy
- choices: \[Choice(finish_reason=‘stop’, index=0, logprobs=None,
  message=ChatCompletionMessage(content=‘604,542 + 6,458,932 equals
  7,063,474.’, refusal=None, role=‘assistant’, annotations=\[\],
  audio=None, function_call=None, tool_calls=None))\]
- created: 1750291309
- model: gpt-4.1-2025-04-14
- object: chat.completion
- service_tier: default
- system_fingerprint: fp_51e1070cf2
- usage: CompletionUsage(completion_tokens=19, prompt_tokens=118,
  total_tokens=137,
  completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0,
  audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0),
  prompt_tokens_details=PromptTokensDetails(audio_tokens=0,
  cached_tokens=0))

</details>

You can see how many tokens have been used at any time by checking the
`use` property.

``` python
chat.use
```

    In: 204; Out: 40; Total: 244

### Tool loop

We can do everything needed to use tools in a single step, by using
[`Chat.toolloop`](https://AnswerDotAI.github.io/cosette/toolloop.html#chat.toolloop).
This can even call multiple tools as needed to solve a problem. For
example, let’s define a tool to handle multiplication:

``` python
def mults(
    a:int,  # First thing to multiply
    b:int=1 # Second thing to multiply
) -> int: # The product of the inputs
    "Multiplies a * b."
    print(f"Finding the product of {a} and {b}")
    return a * b
```

Now with a single call we can calculate `(a+b)*2`. We’ll also define a
small tracing function so we can see each response from the model along
the way (see the sketch after the `toolloop` call below):

``` python
chat = Chat(model, sp=sp, tools=[sums,mults])
pr = f'Calculate ({a}+{b})*2'
pr
```

    'Calculate (604542+6458932)*2'

``` python
def pchoice(r): print(r.choices[0])
```

``` python
r = chat.toolloop(pr)
```
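
If you’d like to see each intermediate response, a tracing callback can
be passed to `toolloop`; we assume here it takes a `trace_func`
parameter, as Claudette’s `toolloop` does (worth confirming in the
toolloop docs):

``` python
# Assumed parameter name: `trace_func`, mirroring Claudette's toolloop
chat2 = Chat(model, sp=sp, tools=[sums, mults])
chat2.toolloop(pr, trace_func=pchoice)
```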

OpenAI uses special tags for math equations, which we can replace using
[`wrap_latex`](https://AnswerDotAI.github.io/cosette/core.html#wrap_latex):

``` python
for o in r:
    display(wrap_latex(contents(o)))
```

(604542 + 6458932) × 2 = 14,126,948.

## Images

As everyone knows, when testing image APIs you have to use a cute puppy.

``` python
fn = Path('samples/puppy.jpg')
Image(filename=fn, width=200)
```

<img src="index_files/figure-commonmark/cell-23-output-1.jpeg"
width="200" />

We create a
[`Chat`](https://AnswerDotAI.github.io/cosette/core.html#chat) object as
before:

``` python
chat = Chat(model)
```

Cosette expects images as `bytes`, so we read in the file:

``` python
img = fn.read_bytes()
```

Prompts to Cosette can be lists containing text, images, or both, e.g.:

``` python
chat([img, "In brief, what color flowers are in this image?"])
```

The flowers in the image are purple.

<details>

- id: chatcmpl-Bjx2lnRK05FvJWFh2smshhAWw0TXx
- choices: \[Choice(finish_reason=‘stop’, index=0, logprobs=None,
  message=ChatCompletionMessage(content=‘The flowers in the image are
  purple.’, refusal=None, role=‘assistant’, annotations=\[\],
  audio=None, function_call=None, tool_calls=None))\]
- created: 1750291423
- model: gpt-4.1-2025-04-14
- object: chat.completion
- service_tier: default
- system_fingerprint: fp_51e1070cf2
- usage: CompletionUsage(completion_tokens=8, prompt_tokens=273,
  total_tokens=281,
  completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0,
  audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0),
  prompt_tokens_details=PromptTokensDetails(audio_tokens=0,
  cached_tokens=0))

</details>

The image is included as input tokens.

``` python
chat.use
```

    In: 273; Out: 8; Total: 281

Alternatively, Cosette supports creating a multi-stage chat with
separate image and text prompts. For instance, you can pass just the
image as the initial prompt (in which case the model will make some
general comments about what it sees), and then follow up with questions
in additional prompts:

``` python
chat = Chat(model)
chat(img)
```

This is an image of an adorable puppy lying on the grass next to some
purple flowers. The puppy appears to be a Cavalier King Charles Spaniel,
known for their sweet expressions, long ears, and beautiful markings.
The scene looks peaceful and charming, with the flowers adding a touch
of color and nature to the setting.

<details>

- id: chatcmpl-Bjx2nluEtIzGnD5IMxE8c2RsG3CNW
- choices: \[Choice(finish_reason=‘stop’, index=0, logprobs=None,
  message=ChatCompletionMessage(content=‘This is an image of an adorable
  puppy lying on the grass next to some purple flowers. The puppy
  appears to be a Cavalier King Charles Spaniel, known for their sweet
  expressions, long ears, and beautiful markings. The scene looks
  peaceful and charming, with the flowers adding a touch of color and
  nature to the setting.’, refusal=None, role=‘assistant’,
  annotations=\[\], audio=None, function_call=None, tool_calls=None))\]
- created: 1750291425
- model: gpt-4.1-2025-04-14
- object: chat.completion
- service_tier: default
- system_fingerprint: fp_51e1070cf2
- usage: CompletionUsage(completion_tokens=65, prompt_tokens=262,
  total_tokens=327,
  completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0,
  audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0),
  prompt_tokens_details=PromptTokensDetails(audio_tokens=0,
  cached_tokens=0))

</details>

``` python
chat('What direction is the puppy facing?')
```

The puppy is facing towards the camera, looking directly at the viewer.
Its body is positioned sideways, but its head is turned forward, making
eye contact with the camera.

<details>

- id: chatcmpl-Bjx2pgeYeYLF5UHSd9iY68IeAjIxy
- choices: \[Choice(finish_reason=‘stop’, index=0, logprobs=None,
  message=ChatCompletionMessage(content=‘The puppy is facing towards the
  camera, looking directly at the viewer. Its body is positioned
  sideways, but its head is turned forward, making eye contact with the
  camera.’, refusal=None, role=‘assistant’, annotations=\[\],
  audio=None, function_call=None, tool_calls=None))\]
- created: 1750291427
- model: gpt-4.1-2025-04-14
- object: chat.completion
- service_tier: default
- system_fingerprint: fp_51e1070cf2
- usage: CompletionUsage(completion_tokens=34, prompt_tokens=342,
  total_tokens=376,
  completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0,
  audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0),
  prompt_tokens_details=PromptTokensDetails(audio_tokens=0,
  cached_tokens=0))

</details>

``` python
chat('What color is it?')
```

The puppy is predominantly white with brown markings, particularly on
its ears and around its eyes. Its nose is black. This color pattern is
common in certain breeds, such as the Cavalier King Charles Spaniel.

<details>

- id: chatcmpl-Bjx2rl2tCdnEcvWF78UN0UaSwnvUS
- choices: \[Choice(finish_reason=‘stop’, index=0, logprobs=None,
  message=ChatCompletionMessage(content=‘The puppy is predominantly
  white with brown markings, particularly on its ears and around its
  eyes. Its nose is black. This color pattern is common in certain
  breeds, such as the Cavalier King Charles Spaniel.’, refusal=None,
  role=‘assistant’, annotations=\[\], audio=None, function_call=None,
  tool_calls=None))\]
- created: 1750291429
- model: gpt-4.1-2025-04-14
- object: chat.completion
- service_tier: default
- system_fingerprint: fp_51e1070cf2
- usage: CompletionUsage(completion_tokens=42, prompt_tokens=389,
  total_tokens=431,
  completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0,
  audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0),
  prompt_tokens_details=PromptTokensDetails(audio_tokens=0,
  cached_tokens=0))

</details>

Note that the image is passed in again for every input in the dialog, so
the number of input tokens increases quickly with this kind of chat.

``` python
chat.use
```

    In: 993; Out: 141; Total: 1134

            
