| Field | Value |
| ----- | ----- |
| Name | delos-cosmos |
| Version | 0.1.12 |
| Summary | Cosmos client. |
| Author | Maria |
| Maintainer | None |
| License | None |
| home_page | None |
| docs_url | None |
| requires_python | <4.0,>=3.11 |
| Keywords | ai, llm, generative |
| Requirements | No requirements were recorded. |
| upload_time | 2024-12-17 14:56:11 |
# Delos Cosmos
Cosmos client for interacting with the Cosmos API.
# Installation
To install the package, use `poetry`:
```bash
poetry add delos-cosmos
```
Or if you use the default `pip`:
```bash
pip install delos-cosmos
```
# Client Initialization
You can create an **API key** that grants access to all services through the **Dashboard** in the **CosmosPlatform** at
`https://platform.cosmos-suite.ai`.
![API Key creation in Cosmos Platform](https://i.ibb.co/6mvm1hQ/api-key-create.png)
To create a `Cosmos` client instance, you need to initialize it with your API key:
```python
from cosmos import CosmosClient
client = CosmosClient("your-api-key")
```
# Endpoints
This `delos-cosmos` client provides access to the following endpoints:
**Status Endpoints**
- `status_health_request`: Check the health of the server.
**Translate Endpoints**
- `translate_text_request`: Translate text.
- `translate_file_request`: Translate a file.
**Web Endpoints**
- `web_search_request`: Perform a web search.
**LLM Endpoints**
- `llm_chat_request`: Chat with an LLM.
- `llm_embed_request`: Embed text using an embedding model.
**Files Endpoints**
A **single file** can be read and parsed with the universal parser endpoint:
- `files_parse_request`: Parse a file to extract the pages, chunks or subchunks.
An **index** groups a set of files in order to be able to query them using natural language. There are several
operations regarding **index management**:
- `files_index_create_request`: Create an index.
- `files_index_add_files_request`: Add files to an index.
- `files_index_delete_files_request`: Delete files from an index.
- `files_index_delete_request`: Delete an index.
- `files_index_restore_request`: Restore a deleted index.
- `files_index_rename_request`: Rename an index.
And regarding **index querying**:
- `files_index_ask_request`: Ask a question about the index documents (it requires that your `index.status.vectorized`
is set to `True`).
- `files_index_embed_request`: Embed or vectorize the index contents.
- `files_index_list_request`: List all indexes.
- `files_index_details_request`: Get details of an index.
These endpoints are accessible through `cosmos` client methods.
> ℹ️ **Info:** Each **endpoint** requires specific **parameters** describing the data to be sent to the API.
>
> Endpoints may expect `text` or `files` to operate with, the `output_language` for your result, the `index_uuid` that
> identifies the set of documents, the `model` to use for the LLM operations, etc.
>
> You can find the standardized parameters, such as the `return_type` for file translation and the `extract_type` for
> the file parser, in the appropriate endpoint sections below.
---
## Status Endpoints
### Status Health Request
To **check the health** of the server and the validity of your API key:
```python
response = client.status_health_request()
if response:
    print(f"Response: {response}")
```
---
## Translate Endpoints
### 1. Translate Text Request
To **translate text**, you can use the `translate_text_request` method:
```python
response = client.translate_text_request(
    text="Hello, world!",
    output_language="fr"
)
if response:
    print(f"Translated Text: {response}")
```
### 2. Translate File Request
To **translate a file**, use the `translate_file_request` method:
```python
from pathlib import Path

local_filepath_1 = Path("/path/to/file1.pdf")

response = client.translate_file_request(
    filepath=local_filepath_1,
    output_language="fr",
)
```
Depending on the type of file translation you prefer, you can set the `return_type` parameter to:
| return_type        | Description                                           |
| ------------------ | ----------------------------------------------------- |
| raw_text `Default` | Returns the translated text only                      |
| url                | Returns the translated file with its layout as a URL  |
| file               | Returns a FastAPI FileResponse type                   |
> 💡 **Tip:** For faster and more economical translations, set `return_type` to `raw_text` to translate only
> the **text content**, without the file layout.
```python
local_filepath_1 = Path("/path/to/file1.pdf")
local_filepath_2 = Path("/path/to/file2.pdf")

# Set return_type='raw_text' -> only the translated text will be returned:
response = client.translate_file_request(
    filepath=local_filepath_1,
    output_language="fr",
    return_type="raw_text"
)

# or return_type='url' -> returns a link to the translated file with the original file's layout:
response = client.translate_file_request(
    filepath=local_filepath_2,
    output_language="fr",
    return_type="url"
)

if response:
    print(f"Translated File Response: {response}")
```
---
## Web Endpoints
### Web Search Request
To perform a **web search**:
```python
response = client.web_search_request(text="What is the capital of France?")

# Or, if you want to specify the output_language and filter results:
response = client.web_search_request(
    text="What is the capital of France?",
    output_language="fr"
)
if response:
    print(f"Search Results: {response}")
```
---
## LLM Endpoints
LLM Endpoints provide a unified way to interact with several Large Language Models and embedders. Currently
supported `model` values are:
| Chat Models | Embedding Models |
| ------------------------- | -------------------- |
| _gpt-3.5_ `Legacy` | **ada-v2** `Default` |
| gpt-4-turbo | |
| gpt-4o | |
| **gpt-4o-mini** `Default` | |
| command-r | |
| command-r-plus | |
| llama-3-70b-instruct | |
| mistral-large | |
| mistral-small | |
### 1. Chat Request
To **chat** with the LLM:
```python
response = client.llm_chat_request(text="Hello, how are you?")

# Default model is handled, so that request is equivalent to:
response = client.llm_chat_request(
    text="Hello, how are you?",
    model="gpt-4o-mini"
)
if response:
    print(f"Chat Response: {response}")
```
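Any other chat model from the table above can be selected the same way; the sketch below just swaps the `model` value (the example prompt and the chosen model name are illustrative only):

```python
# Minimal sketch: any chat model from the table above can be passed via `model`.
response = client.llm_chat_request(
    text="Summarize the Cosmos API in one sentence.",
    model="command-r",  # e.g. swap for "mistral-large" or "gpt-4o"
)
if response:
    print(f"Chat Response: {response}")
```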
### 2. Embed Request
To **embed** some text using an LLM:
```python
response = client.llm_embed_request(text="Hello, how are you?")
if response:
    print(f"Embed Response: {response}")
```
---
## Files Endpoints
### Universal Reader and Parser
The Universal Reader and Parser allows you to open many textual **file** formats and extract their content in a
**standardized structure**. To parse a file:
```python
local_filepath_1 = Path("/path/to/file1.docx")
local_filepath_2 = Path("/path/to/file2.pdf")
response = client.files_parse_request(filepath=local_filepath_1)
if response:
    print(f"Parsed File Response: {response}")
```
The previous request can be further controlled by providing **optional parameters**:
```python
response = client.files_parse_request(
    filepath=local_filepath_1,
    extract_type="chunks",
    k_min=500,
    k_max=1000,
    overlap=0,
    filter_pages="[1,2]",  # subset of pages to select
)
if response:
    print(f"Parsed File Response: {response}")
```
| extract_type     | Description                                                                                                   |
| ---------------- | ------------------------------------------------------------------------------------------------------------- |
| chunks `Default` | Returns the chunks of the file. You can customize their token size by setting `k_min`, `k_max` and `overlap`   |
| subchunks        | Returns the subchunks of the file (minimal blocks in the file, usually containing around 20 or 30 tokens)      |
| pages            | Returns the content of the file parsed as pages                                                                 |
| file             | Returns the whole file contents                                                                                 |
> 💡 **Tip:** When using `extract_type=chunks`, you can define the `k_min`, `k_max` and `overlap` parameters to control
> the size of the chunks. Default values are `k_min=500`, `k_max=1200`, and `overlap=0`.
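For instance, to retrieve the page-level content instead of chunks, the same request can be sent with a different `extract_type` value from the table above (a minimal sketch reusing `local_filepath_1` from the previous examples):

```python
# Minimal sketch: extract the parsed content as pages rather than chunks.
response = client.files_parse_request(
    filepath=local_filepath_1,
    extract_type="pages",
)
if response:
    print(f"Parsed Pages Response: {response}")
```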
### Files Index
An **index** groups a set of files so that they can be queried using natural language. The **Index attributes** are:
| Attributes | Meaning |
| ---------- | ------- |
| index_uuid | Unique identifier of the index. It is randomly generated when the index is created and cannot be altered. |
| name       | Human-friendly name for the index; can be modified through the `rename_index` endpoint. |
| created_at | Creation date |
| updated_at | Date of the last operation performed on the index |
| expires_at | Expiration date of the index. It is only set once the `delete_index` request is explicitly performed. (Default: None) |
| status     | Status of the index. It will be `active`, and only when programmed for deletion will it be `countdown` (2h timeout before effective deletion). |
| vectorized | Boolean status of the index. When `True`, the index is ready to be queried. |
| files      | List of files in the index. Contains their filehash, filename and size. |
| storage    | Storage details of the index: total size in bytes and MB, number of files. |
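These attributes can be read back through the index details request. The exact response type is not specified here, so the sketch below only assumes a JSON-style, dict-like response exposing the attributes above:

```python
# Minimal sketch (assumption: the details response behaves like a dict exposing
# the attributes listed above; adapt the field access to the actual response type).
details = client.files_index_details_request(index_uuid="index-uuid")
if details and details.get("vectorized"):
    print("Index is vectorized and ready to be queried.")
else:
    print("Run files_index_embed_request first to enable querying.")
```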
The following **Index operations** are available:
- `INDEX_LIST`: List all indexes.
- `INDEX_DETAILS`: Get details of an index.
- `INDEX_CREATE`: Create a new index and parse files.
- `INDEX_ADD_FILES`: Add files to an existing index.
- `INDEX_DELETE_FILES`: Delete files from an index.
- `INDEX_DELETE`: Delete an index. **Warning**: _This is a delayed (2h) operation that can be reverted with
  `INDEX_RESTORE`. After 2h, the index is **deleted and not recoverable**._
- `INDEX_RESTORE`: Restore a deleted index _(within the 2h after it was marked for deletion)_.
- `INDEX_EMBED`: Embed index contents.
- `INDEX_ASK`: Ask a question to the index. It requires that `INDEX_EMBED` is performed to allow index contents
querying.
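Put together, a typical index lifecycle chains these operations: create an index from files, embed its contents, then ask questions. The sketch below only combines the client methods documented in the following sections; the UUID placeholder stands for the identifier of the newly created index (returned by the create request or visible via `files_index_list_request`):

```python
from pathlib import Path

# Minimal sketch of a typical workflow: INDEX_CREATE -> INDEX_EMBED -> INDEX_ASK.
filepaths = [Path("/path/to/file1.docx"), Path("/path/to/file2.pdf")]
client.files_index_create_request(filepaths=filepaths, name="My Index")

index_uuid = "your-index-uuid"  # identifier of the index created above

# INDEX_ASK requires the index contents to be embedded first.
client.files_index_embed_request(index_uuid=index_uuid)
answer = client.files_index_ask_request(index_uuid=index_uuid, question="What is Cosmos?")
print(f"Answer: {answer}")
```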
### Files Index Requests
#### 1. Existing Index Overview
To **list all indexes** in your organization, with their files and storage details:
```python
response = client.files_index_list_request()
if response:
    print(f"List Indexes Response: {response}")
```
With the **index details** request you can see the list of files in the index, their filehashes and sizes, the `status`
of the index and its `vectorized` boolean status (find more details about the Index attributes above):
```python
response = client.files_index_details_request(index_uuid="index-uuid")
if response:
    print(f"Index Details Response: {response}")
```
#### 2. Index Management
To **create a new index** and parse files, provide the list of **filepaths** you want to parse:
```python
local_filepaths = [Path("/path/to/file1.docx"), Path("/path/to/file2.pdf")]
response = client.files_index_create_request(
    filepaths=local_filepaths,
    name="Cooking Recipes"
)
if response:
    print(f"Index Create Response: {response}")
```
Let's say the new index has been created with the UUID `d55a285b-0a0d-4ba5-a918-857f63bc9063`. This UUID is used in
the following requests, in particular in `index_details`, whenever information about the index is needed.
You can **rename the index** with the `rename_index` method:
```python
index_uuid = "d55a285b-0a0d-4ba5-a918-857f63bc9063"
response = client.files_index_rename_request(
    index_uuid=index_uuid,
    name="Best Recipes"
)
if response:
    print(f"Rename Index Response: {response}")
```
To **add files** to an existing index, provide the list of **filepaths** you want to add:
```python
index_uuid = "d55a285b-0a0d-4ba5-a918-857f63bc9063"
local_filepath_3 = [Path("/path/to/file3.txt")]
response = client.files_index_add_files_request(
    index_uuid=index_uuid,
    filepaths=local_filepath_3
)
if response:
    print(f"Add Files to Index Response: {response}")
```
To **delete files** from an existing index, specify the **filehashes** of the files you want to delete:
```python
index_uuid = "d55a285b-0a0d-4ba5-a918-857f63bc9063"
filehashes_to_delete = ["2fa92ab4627c199a2827a363469bf4e513c67b758c34d1e316c2968ed68b9634"]
response = client.files_index_delete_files_request(
    index_uuid=index_uuid,
    files_hashes=filehashes_to_delete
)
if response:
    print(f"Delete Files from Index Response: {response}")
```
To **delete an index** (it will be marked for deletion, which becomes effective **after 2h**):
```python
response = client.files_index_delete_request(index_uuid="index-to-delete-uuid")
if response:
    print(f"Delete Index Response: {response}")
```
To **restore an index** marked for deletion (only possible within the 2h after `INDEX_DELETE` was requested):
```python
response = client.files_index_restore_request(index_uuid="index-to-restore-uuid")
if response:
    print(f"Restore Index Response: {response}")
```
#### 3. Index Querying
To **embed** or **vectorize the index contents** in order to enable query operations:
```python
response = client.files_index_embed_request(index_uuid="index-uuid")
if response:
    print(f"Embed Index Response: {response}")
```
To **ask a question** about the index documents (it requires that your `index.status.vectorized` is set to `True`):
```python
response = client.files_index_ask_request(
    index_uuid="index-uuid",
    question="What is Cosmos?"
)
if response:
    print(f"Ask Index Response: {response}")
```
## Requests Usage and Storage
All request responses show the **number of tokens** and the **cost** consumed by the request. The **storage** for index
documents is **limited** by your organization's quota and is shared between all indexes within your organization.
Contents **do not expire**, but they can be deleted by performing an explicit request through the API endpoints or
through the **CosmosPlatform** at `https://platform.cosmos-suite.ai/`.
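The exact shape of the usage information returned with each response is not detailed here, so the following is only a minimal sketch with hypothetical field names (`tokens` and `cost`) to illustrate how such accounting data could be inspected:

```python
# Minimal sketch: `tokens` and `cost` are hypothetical field names used for
# illustration only; check the actual response payload for the real keys.
response = client.llm_chat_request(text="Hello!")
if isinstance(response, dict):
    print(f"Tokens used: {response.get('tokens')}, cost: {response.get('cost')}")
```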
In the **CosmosPlatform**, you can monitor the requests performed by your organization with your API Key and the files
stored in the Index Storage.
![API key usage in Cosmos Platform](https://i.ibb.co/VTD35z1/api-key-usage.png)
In addition to the native requests towards Cosmos and this Python client, you can also manage and delete files directly
from the Cosmos Platform.