cosette

- Name: cosette
- Version: 0.0.4
- Home page: https://github.com/AnswerDotAI/cosette
- Summary: A helper for using the OpenAI API
- Upload time: 2024-10-24 05:21:48
- Author: Jeremy Howard
- Requires Python: >=3.8
- License: Apache Software License 2.0
- Keywords: nbdev, jupyter, notebook, python
# cosette


<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->

## Install

``` sh
pip install cosette
```

## Getting started

OpenAI’s Python SDK will automatically be installed with Cosette if you
don’t already have it.
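
You’ll also need an OpenAI API key available to the SDK; by default it
reads the standard `OPENAI_API_KEY` environment variable (placeholder
key shown):

``` sh
export OPENAI_API_KEY=sk-...
```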

``` python
from cosette import *
```

Cosette only exports the symbols that are needed to use the library, so
you can use `import *` to import them. Alternatively, just use:

``` python
import cosette
```

…and then add the prefix `cosette.` to any usages of the module.
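
For example, the `models` list described next becomes:

``` python
import cosette

cosette.models
```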

Cosette provides `models`, which is a list of models currently available
from the SDK.

``` python
models
```

    ('gpt-4o',
     'gpt-4-turbo',
     'gpt-4',
     'gpt-4-32k',
     'gpt-3.5-turbo',
     'gpt-3.5-turbo-instruct')

For these examples, we’ll use GPT-4o.

``` python
model = models[0]
```

## Chat

The main interface to Cosette is the
[`Chat`](https://AnswerDotAI.github.io/cosette/core.html#chat) class,
which provides a stateful interface to the models:

``` python
chat = Chat(model, sp="""You are a helpful and concise assistant.""")
chat("I'm Jeremy")
```

Hi Jeremy! How can I assist you today?

<details>

- id: chatcmpl-9R8Z0uRHgWl7XaV6yJtahVDyDTzMZ
- choices: \[Choice(finish_reason=‘stop’, index=0, logprobs=None,
  message=ChatCompletionMessage(content=‘Hi Jeremy! How can I assist you
  today?’, role=‘assistant’, function_call=None, tool_calls=None))\]
- created: 1716254802
- model: gpt-4o-2024-05-13
- object: chat.completion
- system_fingerprint: fp_729ea513f7
- usage: CompletionUsage(completion_tokens=10, prompt_tokens=21,
  total_tokens=31)

</details>

``` python
r = chat("What's my name?")
r
```

Your name is Jeremy. How can I assist you further?

<details>

- id: chatcmpl-9R8Z1c76TFqYFYjyON08CbkAmjerN
- choices: \[Choice(finish_reason=‘stop’, index=0, logprobs=None,
  message=ChatCompletionMessage(content=‘Your name is Jeremy. How can I
  assist you further?’, role=‘assistant’, function_call=None,
  tool_calls=None))\]
- created: 1716254803
- model: gpt-4o-2024-05-13
- object: chat.completion
- system_fingerprint: fp_729ea513f7
- usage: CompletionUsage(completion_tokens=12, prompt_tokens=43,
  total_tokens=55)

</details>

As you see above, displaying the results of a call in a notebook shows
just the message contents, with the other details hidden behind a
collapsible section. Alternatively you can `print` the details:

``` python
print(r)
```

    ChatCompletion(id='chatcmpl-9R8Z1c76TFqYFYjyON08CbkAmjerN', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Your name is Jeremy. How can I assist you further?', role='assistant', function_call=None, tool_calls=None))], created=1716254803, model='gpt-4o-2024-05-13', object='chat.completion', system_fingerprint='fp_729ea513f7', usage=In: 43; Out: 12; Total: 55)

You can use `stream=True` to stream the results as soon as they arrive
(although you will only see the gradual generation if you execute the
notebook yourself, of course!):

``` python
for o in chat("What's your name?", stream=True): print(o, end='')
```

    I don't have a personal name, but you can call me Assistant. How can I help you today, Jeremy?

## Tool use

[Tool use](https://platform.openai.com/docs/guides/function-calling)
lets the model use external tools.

We use [docments](https://fastcore.fast.ai/docments.html) to make
defining Python functions as ergonomic as possible. Each parameter (and
the return value) should have a type, and a docments comment with the
description of what it is. As an example we’ll write a simple function
that adds numbers together, and will tell us when it’s being called:

``` python
def sums(
    a:int,  # First thing to sum
    b:int=1 # Second thing to sum
) -> int: # The sum of the inputs
    "Adds a + b."
    print(f"Finding the sum of {a} and {b}")
    return a + b
```
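
Since the tool description sent to the model comes from these docments
comments, you can check what will be extracted with fastcore’s
`docments` function (a quick sanity check, assuming fastcore is
installed):

``` python
from fastcore.docments import docments

# Maps each parameter (and the return value) to its comment
docments(sums)
```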

Sometimes the model will say something like “according to the `sums`
tool the answer is” – generally we’d rather it just tells the user the
answer, so we can use a system prompt to help with this:

``` python
sp = "Never mention what tools you use."
```

We’ll get the model to add up some long numbers:

``` python
a,b = 604542,6458932
pr = f"What is {a}+{b}?"
pr
```

    'What is 604542+6458932?'

To use tools, pass a list of them to
[`Chat`](https://AnswerDotAI.github.io/cosette/core.html#chat):

``` python
chat = Chat(model, sp=sp, tools=[sums])
```

Now when we call that with our prompt, the model doesn’t return the
answer, but instead returns a `tool_calls` message, which means we have
to call the named tool with the provided parameters:

``` python
r = chat(pr)
r
```

    Finding the sum of 604542 and 6458932

- id: chatcmpl-9R8Z2JNenseQyQoseIs8XNImmy2Bo
- choices: \[Choice(finish_reason=‘tool_calls’, index=0, logprobs=None,
  message=ChatCompletionMessage(content=None, role=‘assistant’,
  function_call=None,
  tool_calls=\[ChatCompletionMessageToolCall(id=‘call_HV4yaZEY1OYK1zYouAcVwfZK’,
  function=Function(arguments=‘{“a”:604542,“b”:6458932}’, name=‘sums’),
  type=‘function’)\]))\]
- created: 1716254804
- model: gpt-4o-2024-05-13
- object: chat.completion
- system_fingerprint: fp_729ea513f7
- usage: CompletionUsage(completion_tokens=21, prompt_tokens=96,
  total_tokens=117)
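
For reference, the manual step would look roughly like this with the
raw response object (a sketch only; Cosette does the equivalent
internally, including sending the result back as a tool message):

``` python
import json

# Parse the requested tool call and run the named function ourselves
tools = {'sums': sums}
tc = r.choices[0].message.tool_calls[0]
args = json.loads(tc.function.arguments)  # {'a': 604542, 'b': 6458932}
result = tools[tc.function.name](**args)
```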

Cosette handles all that for us – we just have to pass along the
message, and it all happens automatically:

``` python
chat()
```

The sum of 604542 and 6458932 is 7063474.

<details>

- id: chatcmpl-9R8Z4CrFU3zd71acZzdCsQFQDHxp9
- choices: \[Choice(finish_reason=‘stop’, index=0, logprobs=None,
  message=ChatCompletionMessage(content=‘The sum of 604542 and 6458932
  is 7063474.’, role=‘assistant’, function_call=None,
  tool_calls=None))\]
- created: 1716254806
- model: gpt-4o-2024-05-13
- object: chat.completion
- system_fingerprint: fp_729ea513f7
- usage: CompletionUsage(completion_tokens=18, prompt_tokens=128,
  total_tokens=146)

</details>

You can see how many tokens have been used at any time by checking the
`use` property, which totals usage across every call in the chat (here,
96+128 input and 21+18 output tokens from the two requests above).

``` python
chat.use
```

    In: 224; Out: 39; Total: 263

### Tool loop

We can do everything needed to use tools in a single step, by using
[`Chat.toolloop`](https://AnswerDotAI.github.io/cosette/toolloop.html#chat.toolloop).
This can even call multiple tools as needed to solve a problem. For
example, let’s define a tool to handle multiplication:

``` python
def mults(
    a:int,  # First thing to multiply
    b:int=1 # Second thing to multiply
) -> int: # The product of the inputs
    "Multiplies a * b."
    print(f"Finding the product of {a} and {b}")
    return a * b
```

Now with a single call we can calculate `(a+b)*2` – by passing a
`trace_func` we can see each response from the model in the process:

``` python
chat = Chat(model, sp=sp, tools=[sums,mults])
pr = f'Calculate ({a}+{b})*2'
pr
```

    'Calculate (604542+6458932)*2'

``` python
def pchoice(r): print(r.choices[0])
```

``` python
r = chat.toolloop(pr, trace_func=pchoice)
```

    Finding the sum of 604542 and 6458932
    Finding the product of 2 and 1
    Choice(finish_reason='tool_calls', index=0, logprobs=None, message=ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_OfypQBQoAuIUksucevaxwH5Z', function=Function(arguments='{"a": 604542, "b": 6458932}', name='sums'), type='function'), ChatCompletionMessageToolCall(id='call_yKAL5o96cDef83OFJhDB21MM', function=Function(arguments='{"a": 2}', name='mults'), type='function')]))
    Finding the product of 7063474 and 2
    Choice(finish_reason='tool_calls', index=0, logprobs=None, message=ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_Ffye7Tf65CjVjwwx8Sp8031i', function=Function(arguments='{"a":7063474,"b":2}', name='mults'), type='function')]))
    Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='The result of \\((604542 + 6458932) \\times 2\\) is 14,126,948.', role='assistant', function_call=None, tool_calls=None))

OpenAI wraps math in special `\(...\)` tags, which we can replace with
standard `$` delimiters using
[`wrap_latex`](https://AnswerDotAI.github.io/cosette/core.html#wrap_latex)
(`contents` extracts the message text from the response):

``` python
wrap_latex(contents(r))
```

The result of $(604542 + 6458932) \times 2$ is 14,126,948.

## Images

As everyone knows, when testing image APIs you have to use a cute puppy.

``` python
from pathlib import Path
from IPython import display

fn = Path('samples/puppy.jpg')
display.Image(filename=fn, width=200)
```

<img src="index_files/figure-commonmark/cell-21-output-1.jpeg"
width="200" />

We create a
[`Chat`](https://AnswerDotAI.github.io/cosette/core.html#chat) object as
before:

``` python
chat = Chat(model)
```

Cosette expects images as `bytes`, so we read in the file:

``` python
img = fn.read_bytes()
```

Prompts to Cosette can be lists containing text, images, or both, e.g.:

``` python
chat([img, "In brief, what color flowers are in this image?"])
```

The flowers in the image are purple.

<details>

- id: chatcmpl-9R8Vqpx62OezZDjAt3SIfnjMpH3I8
- choices: \[Choice(finish_reason=‘stop’, index=0, logprobs=None,
  message=ChatCompletionMessage(content=‘The flowers in the image are
  purple.’, role=‘assistant’, function_call=None, tool_calls=None))\]
- created: 1716254606
- model: gpt-4o-2024-05-13
- object: chat.completion
- system_fingerprint: fp_927397958d
- usage: CompletionUsage(completion_tokens=8, prompt_tokens=273,
  total_tokens=281)

</details>

The image is included in the input token count.

``` python
chat.use
```

    In: 273; Out: 8; Total: 281

Alternatively, Cosette supports creating a multi-stage chat with
separate image and text prompts. For instance, you can pass just the
image as the initial prompt (in which case the model will make some
general comments about what it sees), and then follow up with questions
in additional prompts:

``` python
chat = Chat(model)
chat(img)
```

What an adorable puppy! This puppy has a white and light brown coat and
is lying on green grass next to some purple flowers. Puppies like this
are commonly seen from breeds such as Cavalier King Charles Spaniels,
though without more context, it’s difficult to identify the breed
precisely. It looks very playful and cute!

<details>

- id: chatcmpl-9R8VsAnTWr9k1DShC7mZsnhRtqxRA
- choices: \[Choice(finish_reason=‘stop’, index=0, logprobs=None,
  message=ChatCompletionMessage(content=“What an adorable puppy! This
  puppy has a white and light brown coat and is lying on green grass
  next to some purple flowers. Puppies like this are commonly seen from
  breeds such as Cavalier King Charles Spaniels, though without more
  context, it’s difficult to identify the breed precisely. It looks very
  playful and cute!”, role=‘assistant’, function_call=None,
  tool_calls=None))\]
- created: 1716254608
- model: gpt-4o-2024-05-13
- object: chat.completion
- system_fingerprint: fp_927397958d
- usage: CompletionUsage(completion_tokens=63, prompt_tokens=262,
  total_tokens=325)

</details>

``` python
chat('What direction is the puppy facing?')
```

The puppy is facing slightly to the right of the camera, with its head
turned towards the viewer. Its body is positioned in such a way that
suggests it is laying down or resting on the grass.

<details>

- id: chatcmpl-9R8VuzGIABwg341oOHMXbGGa7daya
- choices: \[Choice(finish_reason=‘stop’, index=0, logprobs=None,
  message=ChatCompletionMessage(content=‘The puppy is facing slightly to
  the right of the camera, with its head turned towards the viewer. Its
  body is positioned in such a way that suggests it is laying down or
  resting on the grass.’, role=‘assistant’, function_call=None,
  tool_calls=None))\]
- created: 1716254610
- model: gpt-4o-2024-05-13
- object: chat.completion
- system_fingerprint: fp_927397958d
- usage: CompletionUsage(completion_tokens=40, prompt_tokens=340,
  total_tokens=380)

</details>

``` python
chat('What color is it?')
```

The puppy has a predominantly white coat with light brown patches,
particularly around its ears and eyes. This coloration is commonly seen
in certain breeds, such as the Cavalier King Charles Spaniel.

<details>

- id: chatcmpl-9R8Vwtlu6aDEGQ8O7bZFk8rfT9FGL
- choices: \[Choice(finish_reason=‘stop’, index=0, logprobs=None,
  message=ChatCompletionMessage(content=‘The puppy has a predominantly
  white coat with light brown patches, particularly around its ears and
  eyes. This coloration is commonly seen in certain breeds, such as the
  Cavalier King Charles Spaniel.’, role=‘assistant’, function_call=None,
  tool_calls=None))\]
- created: 1716254612
- model: gpt-4o-2024-05-13
- object: chat.completion
- system_fingerprint: fp_927397958d
- usage: CompletionUsage(completion_tokens=38, prompt_tokens=393,
  total_tokens=431)

</details>

Note that the image is passed in again with every input in the dialog,
so the number of input tokens increases quickly with this kind of chat
(here, 262 + 340 + 393 = 995 input tokens across the three calls).

``` python
chat.use
```

    In: 995; Out: 141; Total: 1136

            
