pinecone-plugin-assistant

Name: pinecone-plugin-assistant
Version: 0.3.0
Summary: Assistant plugin for Pinecone SDK
Home page: https://www.pinecone.io
Author: Pinecone Systems, Inc.
Requires Python: <4.0,>=3.8
License: Apache-2.0
Uploaded: 2024-10-01 12:57:53
# Assistant

Interact with Pinecone's Assistant APIs, e.g. create, manage, and chat with assistants (currently in beta). Pinecone Assistant is also available in the [console](https://app.pinecone.io/).


## Quickstart
The following example shows how to create an assistant, upload documents on a particular topic, and chat with
the assistant about those documents so that its answers are grounded in your data.

```python
from pinecone import Pinecone
from pinecone_plugins.assistant.models.chat import Message

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')

# Create an assistant (in this case we'll store documents about planets)
space_assistant = pc.assistant.create_assistant(assistant_name="space")

# Upload information to your assistant
space_assistant.upload_file("./space-fun-facts.pdf")

# Once the upload has succeeded, ask the assistant a question
msg = Message(content="How old is the earth?")
resp = space_assistant.chat_completions(messages=[msg])
print(resp)

# {'choices': [{'finish_reason': 'stop',
# 'index': 0,
# 'message': {'content': 'The age of the Earth is estimated to be '
#                         'about 4.54 billion years, based on '
#                         'evidence from radiometric age dating of '
#                         'meteorite material and Earth rocks, as '
#                         'well as lunar samples. This estimate has '
#                         'a margin of error of about 1%.',
#             'role': 'assistant'}}],
# 'id': '00000000000000001a377ceeaabf3c18',
# ...}

```

## Assistants API

### Create Assistant
To create an assistant, see the example below. This API creates an assistant with the specified name, instructions, metadata, and optional timeout settings.

```python
from pinecone import Pinecone

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')
metadata = {"author": "Jane Doe", "version": "1.0"}

assistant = pc.assistant.create_assistant(
    assistant_name="example_assistant", 
    instructions="Always use British English spelling and vocabulary.",
    metadata=metadata,
    timeout=30
)
```

Arguments:
- `assistant_name`: The name to assign to the assistant.
    - type: `str`
- `instructions`: Custom instructions for the assistant. These will be applied to all future chat interactions.
    - type: `Optional[str] = None`
- `metadata`: A dictionary containing metadata for the assistant.
    - type: `Optional[dict[str, any]] = None`
- `timeout`: The number of seconds to wait for the assistant operation to complete (see the polling sketch below).
    - If `None`, wait indefinitely until the operation completes.
    - If `>= 0`, time out after this many seconds.
    - If `-1`, return immediately and do not wait.
    - type: `Optional[int] = None`

Returns:
- `AssistantModel` object with the following properties:
    - `name`: Contains the name of the assistant.
    - `instructions`: Custom instructions for the assistant.
    - `metadata`: Contains the provided metadata.
    - `created_at`: Contains the timestamp of when the assistant was created.
    - `updated_at`: Contains the timestamp of when the assistant was last updated.
    - `status`: Contains the status of the assistant. This is one of:
        - 'Initializing'
        - 'Ready'
        - 'Terminating'
        - 'Failed'
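
When `timeout=-1`, creation returns immediately and you can poll for readiness yourself. A minimal polling sketch, assuming the `status` values listed above and the `describe_assistant` method documented below:

```python
import time

from pinecone import Pinecone

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')

# Return immediately instead of blocking on creation.
assistant = pc.assistant.create_assistant(
    assistant_name="example_assistant",
    timeout=-1,
)

# Poll until the assistant reaches a terminal state.
while assistant.status not in ("Ready", "Failed"):
    time.sleep(5)  # poll interval is an arbitrary choice
    assistant = pc.assistant.describe_assistant(assistant_name="example_assistant")

print(f"Assistant status: {assistant.status}")
```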

### Describe Assistant
The example below describes/fetches an assistant with the specified name. It will raise a 404 error if no assistant exists with the specified name. There are two equivalent methods:

```python
from pinecone import Pinecone

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')

assistant = pc.assistant.describe_assistant(
    assistant_name="example_assistant", 
)

# we can also do this
assistant = pc.assistant.Assistant(
    assistant_name="example_assistant", 
)
```
Arguments:
- `assistant_name`: The name of the assistant to fetch.
    - type: `str`, required

Returns:
- `AssistantModel` see [Create Assistant](#create-assistant)

### Update Assistant
To update an assistant's instructions and/or metadata, see the example below.

```python
from pinecone import Pinecone

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')
metadata = {"author": "Jane Doe", "version": "2.0"}

assistant = pc.assistant.update_assistant(
    assistant_name="example_assistant", 
    instructions="Always use Australian English spelling and vocabulary.",
    metadata=metadata,
)
```

Arguments:
- `assistant_name`: The name of the assistant to update.
    - type: `str`, required
- `instructions`: Custom instructions for the assistant. These will be applied to all future chat interactions.
    - type: `Optional[str] = None`
- `metadata`: A dictionary containing metadata for the assistant. If provided, it completely replaces the existing metadata; if `None` (the default), the existing metadata is left unchanged (see the merge sketch below).
    - type: `Optional[dict[str, any]] = None`

Returns:
- `AssistantModel` see [Create Assistant](#create-assistant)
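
Because a non-`None` `metadata` replaces the stored metadata wholesale, preserving untouched keys requires a read-modify-write. A sketch, assuming `metadata` on the returned `AssistantModel` behaves like a plain dict:

```python
from pinecone import Pinecone

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')

# Read the current metadata, change one key, and write the merged dict back.
assistant = pc.assistant.describe_assistant(assistant_name="example_assistant")
merged = dict(assistant.metadata or {})
merged["version"] = "2.1"

pc.assistant.update_assistant(
    assistant_name="example_assistant",
    metadata=merged,
)
```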


### List Assistants
Lists all assistants created in the current project.

```python
from pinecone import Pinecone

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')

assistants = pc.assistant.list_assistants()
```

Returns:
- `List[AssistantModel]` objects
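
A short usage sketch that prints the documented `name` and `status` of each assistant:

```python
from pinecone import Pinecone

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')

# One line per assistant in the project.
for assistant in pc.assistant.list_assistants():
    print(f"{assistant.name}: {assistant.status}")
```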

### Delete Assistant
Deletes an assistant with the specified name. Will raise a 404 error if no assistant exists with the specified name.

```python
from pinecone import Pinecone

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')

pc.assistant.delete_assistant(
    assistant_name="example_assistant", 
)
```

Arguments:
- `assistant_name`: The name of the assistant to delete.
    - type: `str`, required

Returns:
- NoneType
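
Combined with `list_assistants`, this makes project-wide cleanup a short loop. A sketch (destructive, so only run it if you really want to remove every assistant in the project):

```python
from pinecone import Pinecone

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')

# Delete every assistant in the current project.
for assistant in pc.assistant.list_assistants():
    pc.assistant.delete_assistant(assistant_name=assistant.name)
```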


## Assistants Model API 

### Upload File to Assistant
Uploads a file from the specified path to this assistant for internal processing.

```python
from pinecone import Pinecone

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')

assistant = pc.assistant.Assistant(
    assistant_name="example_assistant", 
)

# upload file
resp = assistant.upload_file(
    file_path="/path/to/file.txt",
    timeout=None
)
```

Arguments:        
- `file_path`: The path to the file that needs to be uploaded.
    - type: `str`, required

- `timeout`: Specify the number of seconds to wait until file processing is done (see the polling sketch below).
    - If `None`, wait indefinitely.
    - If `>= 0`, time out after this many seconds.
    - If `-1`, return immediately and do not wait.
    - type: `Optional[int] = None`

Returns:
- `FileModel` object with the following properties:
    - `id`: The file id of the uploaded file.
    - `name`: The name of the uploaded file.
    - `created_on`: The timestamp of when the file was created.
    - `updated_on`: The timestamp of the last update to the file.
    - `metadata`: Metadata associated with the file.
    - `status`: The status of the file.
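
As with assistant creation, passing `timeout=-1` returns before processing finishes, so you can poll the file yourself. A sketch using the `describe_file` method documented below; the "Available" status appears in examples later in this README, while "Failed" as a terminal status is an assumption:

```python
import time

from pinecone import Pinecone

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')
assistant = pc.assistant.Assistant(assistant_name="example_assistant")

# Start the upload without blocking on processing.
file = assistant.upload_file(file_path="/path/to/file.txt", timeout=-1)

# Poll until the file leaves its processing state ("Failed" is assumed here).
while file.status not in ("Available", "Failed"):
    time.sleep(5)
    file = assistant.describe_file(file_id=file.id)

print(f"{file.name}: {file.status}")
```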

### Describe File in Assistant
Describes a file with the specified file `id` in this assistant, including information on its status and metadata.

```python
from pinecone import Pinecone

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')

assistant = pc.assistant.Assistant(
    assistant_name="example_assistant", 
)

# describe file
file = assistant.describe_file(file_id="070513b3-022f-4966-b583-a9b12e0290ff")
```

Arguments:
- `file_id`: The file ID of the file to be described.
    - type: `str`, required

Returns:
- `FileModel` object with the following properties:
    - `id`: The UUID of the requested file.
    - `name`: The name of the requested file.
    - `created_on`: The timestamp of when the file was created.
    - `updated_on`: The timestamp of the last update to the file.
    - `metadata`: Metadata associated with the file.
    - `status`: The status of the file.

### List Files
Lists all uploaded files in this assistant.

```python
from pinecone import Pinecone

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')

assistant = pc.assistant.Assistant(
    assistant_name="example_assistant", 
)

files = assistant.list_files()
```

Arguments:
None

Returns:
- `List[FileModel]`, the list of files in the assistant 


### Delete File from Assistant
Deletes a file with the specified `file_id` from this assistant.

```python
from pinecone import Pinecone

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')

assistant = pc.assistant.Assistant(
    assistant_name="example_assistant", 
)

# delete file
assistant.delete_file(file_id="070513b3-022f-4966-b583-a9b12e0290ff")
```

Arguments:
- `file_id`: The file ID of the file to be deleted.
    - type: `str`, required

Returns:
- `NoneType`
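
Since `delete_file` takes a file ID rather than a name, deleting by name means looking the file up first via `list_files`. A sketch with a hypothetical target name:

```python
from pinecone import Pinecone

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')
assistant = pc.assistant.Assistant(assistant_name="example_assistant")

target_name = "file.txt"  # hypothetical file to remove

# Find the matching file's ID, then delete it.
for file in assistant.list_files():
    if file.name == target_name:
        assistant.delete_file(file_id=file.id)
        break
```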

### Chat
Performs a chat request to the assistant and returns chat results in our custom format. Use this API if you want more control over how citations are formatted. If `stream` is set to `True`, this function will stream the response in chunks by returning a generator.

```python
from pinecone import Pinecone
from pinecone_plugins.assistant.models.chat import Message

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')

space_assistant = pc.assistant.Assistant(assistant_name="space")

msg = Message(content="How old is the earth?")
resp = space_assistant.chat(messages=[msg])

# The stream version
chunks = space_assistant.chat(messages=[msg], stream=True)

for chunk in chunks:
    if chunk:
        print(chunk)
```

Arguments:
- `messages`: The context for the chat request. The final element in the list is the current user query; earlier elements provide conversation history (see the multi-turn sketch below).
    - type: `List[Message]` where `Message` requires the following:
        - `role`: `str`, the role of the context ('user' or 'agent')
        - `content`: `str`, the content of the context

- `stream`: If this flag is enabled, the response is streamed in chunks as a generator; see the streaming return types below.
    - type: `bool`, default `False`
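
As noted above, earlier list elements supply conversation history. A multi-turn sketch; the `role` keyword on `Message` follows the field description above (the quickstart only shows `content`, so treat explicit roles as an assumption):

```python
from pinecone import Pinecone
from pinecone_plugins.assistant.models.chat import Message

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')
space_assistant = pc.assistant.Assistant(assistant_name="space")

# Prior turns give the assistant context; the last message is the new query.
history = [
    Message(role="user", content="How old is the earth?"),
    Message(role="agent", content="About 4.54 billion years."),
    Message(role="user", content="How was that age estimated?"),
]
resp = space_assistant.chat(messages=history)
```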

Returns:
- The default result is a `ChatResultModel` with the following format:
    - `finish_reason`: The reason the response finished, e.g., "stop".
    - `index`: The index of the choice in the list.
    - `message`: An object with the following properties:
        - `content`: The content of the message.
        - `role`: The role of the message sender, e.g., "assistant".
    - `id`: The unique identifier of the chat completion.
    - `model`: The model used for the chat completion, e.g., "gpt-3.5-turbo-0613".
    - `citations`: A list of citations with the following structure:
        - `position`: The position of the citation in the document.
        - `references`: A list of references with the following structure:
            - `file`: A dictionary with the following properties:
                - `created_on`: The timestamp of when the file was created.
                - `id`: The file ID.
                - `name`: The name of the file.
                - `status`: The status of the file.
                - `updated_on`: The timestamp of the last update to the file.
            - `pages`: The list of pages that the citation references.
            
For example:
```json
{
    "finish_reason": "stop",
    "index": 0,
    "message": {
        "content": "The 2020 World Series was played in Texas at Globe Life Field in Arlington.",
        "role": "assistant"
    },
    "id": "chatcmpl-7QyqpwdfhqwajicIEznoc6Q47XAyW",
    "model": "gpt-3.5-turbo-0613",
    "citations": [
        {
            "position": 3,
            "references": [
                {
                    "file": {
                        "created_on": "2024-06-02T19:48:00Z",
                        "id": "070513b3-022f-4966-b583-a9b12e0290ff",
                        "name": "tiny_file.txt",
                        "status": "Available",
                        "updated_on": "2024-06-02T19:48:00Z"
                    },
                    "pages": [1, 2, 3]
                }
            ],
        }
    ]
}
```
- When `stream` is set to `True`, the response is a generator of chunk objects rather than a single result. Each chunk can be one of the following types:
    - StreamChatResultModelMessageStart:
        - `type`: The type of the message, which is "message_start".
        - `id`: The unique identifier of the message.
        - `model`: The model used for the chat completion, e.g., "gpt-4o-2024-05-13".
        - `role`: The role of the message sender, which is "assistant".

    Example:
    ```json
        {
            "type": "message_start",
            "id": "0000000000000000468323be9d266e55",
            "model": "gpt-4o-2024-05-13",
            "role": "assistant"
        }
    ```
    - StreamChatResultModelContentDelta:
        - `type`: The type of the message, which is "content_chunk".
        - `id`: The unique identifier of the message.
        - `model`: The model used for the chat completion, e.g., "gpt-4o-2024-05-13".
        - `delta`: An object with the following properties:
            - `content`: The incremental content of the message. 
    ```json
        {
            "type": "content_chunk",
            "id": "0000000000000000468323be9d266e55",
            "model": "gpt-4o-2024-05-13",
            "delta": {
                "content": "The"
            }
        }
    ```
    - StreamChatResultModelCitation:
        - `type`: The type of the message, which is "citation".
        - `id`: The unique identifier of the message.
        - `model`: The model used for the chat completion, e.g., "gpt-4o-2024-05-13".
        - `citation`: An object with the following properties:
            - `position`: The position of the citation in the document.
            - `references`: A list of references with the following structure:
                - `id`: The file ID.
                - `file`: A dictionary with the following properties:
                    - `status`: The status of the file.
                    - `id`: The file ID.
                    - `name`: The name of the file.
                    - `size`: The size of the file.
                    - `metadata`: The metadata of the file.
                    - `updated_on`: The timestamp of the last update to the file.
                    - `created_on`: The timestamp of when the file was created.
                    - `percent_done`: The percentage of the file that has been processed.
                    - `signed_url`: The signed URL of the file.
                - `pages`: The list of pages that the citation references.
    ```json
        {
            "type": "citation",
            "id": "0000000000000000116990b44044d21e",
            "model": "gpt-4o-2024-05-13",
            "citation": {
                "position": 247,
                "references": [{
                    "id": "s0",
                    "file": {
                        "status": "Available",
                        "id": "985edb6c-f649-4334-8f14-9a16b7039ab6",
                        "name": "PEPSICO_2022_10K.pdf",
                        "size": 2993516,
                        "metadata": {},
                        "updated_on": "2024-08-08T15:41:58.839846634Z",
                        "created_on": "2024-08-08T15:41:07.427879083Z",
                        "percent_done": 0,
                        "signed_url": "example.com"
                    },
                    "pages": [
                        32
                    ]
                }]
            }
        }
    ```
    - StreamChatResultModelMessageEnd:
        - `type`: The type of the message, which is "message_end".
        - `id`: The unique identifier of the message.
        - `model`: The model used for the chat completion, e.g., "gpt-4o-2024-05-13".
        - `finish_reason`: The reason the response finished, e.g., "stop".
        - `usage`: An object with the following properties:
            - `prompt_tokens`: The number of prompt tokens used.
            - `completion_tokens`: The number of completion tokens used.
            - `total_tokens`: The total number of tokens used.
    ```json
        {
            "type": "message_end",
            "id": "0000000000000000116990b44044d21e",
            "model": "gpt-4o-2024-05-13",
            "finish_reason": "stop",
            "usage": {
                "prompt_tokens": 1,
                "completion_tokens": 1,
                "total_tokens": 2
            }
        }
    ```
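
A sketch that consumes the typed stream end to end, accumulating text from "content_chunk" events and collecting citations; it assumes each chunk exposes its fields as attributes (adjust if your version yields dict-like objects):

```python
from pinecone import Pinecone
from pinecone_plugins.assistant.models.chat import Message

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')
space_assistant = pc.assistant.Assistant(assistant_name="space")

answer_parts = []
citations = []

for chunk in space_assistant.chat(messages=[Message(content="How old is the earth?")], stream=True):
    if chunk.type == "content_chunk":
        answer_parts.append(chunk.delta.content)  # incremental text
    elif chunk.type == "citation":
        citations.append(chunk.citation)          # position + references

print("".join(answer_parts))
print(f"{len(citations)} citation(s) collected")
```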

### Chat Completions
Performs a chat completion request to the assistant. If `stream` is set to `True`, this function will stream the response in chunks by returning a generator.


```python
from pinecone import Pinecone
from pinecone_plugins.assistant.models.chat import Message

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')

space_assistant = pc.assistant.Assistant(assistant_name="space")

msg = Message(content="How old is the earth?")
resp = space_assistant.chat_completions(messages=[msg])

# The stream version
chunks = space_assistant.chat_completions(messages=[msg], stream=True)

for chunk in chunks:
    if chunk:
        print(chunk)
```

Arguments:
- `messages`: The context for the chat request. The final element in the list is the current user query; earlier elements provide conversation history.
    - type: `List[Message]` where `Message` requires the following:
        - `role`: `str`, the role of the context ('user' or 'agent')
        - `content`: `str`, the content of the context

- `stream`: If this flag is enabled, the return type is an `Iterable[StreamingChatResultModel]` and data is returned as a generator/stream.
    - type: `bool`, default `False`

Returns:
- The default result is a `ChatResultModel` with the following format:
    - `choices`: A list with the following structure:
        - `finish_reason`: The reason the response finished, e.g., "stop".
        - `index`: The index of the choice in the list.
        - `message`: An object with the following properties:
            - `content`: The content of the message.
            - `role`: The role of the message sender, e.g., "assistant".
        - `logprobs`: The log probabilities (if applicable), otherwise `null`.
    - `id`: The unique identifier of the chat completion.
    - `model`: The model used for the chat completion, e.g., "gpt-3.5-turbo-0613".
    
See the example below:
```json
{
    "choices": [
        {
            "finish_reason": "stop",
            "index": 0,
            "message": {
                "content": "The 2020 World Series was played in Texas at Globe Life Field in Arlington.",
                "role": "assistant"
            },
            "logprobs": null
        }
    ],
    "id": "00000000000000005c12d4d71263b642",
    "model": "space"
}
```
- When `stream` is set to `True`, the response is an iterable of `StreamingChatResultModel` objects with the following properties:
    - `choices`: A list with the following structure:
        - `finish_reason`: The reason the response finished, which can be `null` while streaming.
        - `index`: The index of the choice in the list.
        - `delta`: An object with the following properties:
            - `content`: The incremental content of the message.
            - `role`: The role of the message sender, which can be empty while streaming.
        - `logprobs`: The log probabilities (if applicable), otherwise `null`.
    - `id`: The unique identifier of the chat completion.
    - `model`: The model used for the chat completion, e.g., "gpt-3.5-turbo-0613".

See the example below:
```json
    {
        "choices": [
            {
                "finish_reason": null,
                "index": 0,
                "delta": {
                    "content": "The",
                    "role": ""
                },
                "logprobs": null
            }
        ],
        "id": "00000000000000005d487d0ba0cde006",
        "model": "space"
    }
```
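
A sketch that prints the streamed completion as it arrives, assuming attribute access on the chunks matching the schema above:

```python
from pinecone import Pinecone
from pinecone_plugins.assistant.models.chat import Message

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')
space_assistant = pc.assistant.Assistant(assistant_name="space")

chunks = space_assistant.chat_completions(
    messages=[Message(content="How old is the earth?")],
    stream=True,
)

# Each chunk carries an incremental delta; print it without a trailing newline.
for chunk in chunks:
    if chunk and chunk.choices:
        print(chunk.choices[0].delta.content or "", end="", flush=True)
print()
```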

            
