# vipas

- **Name**: vipas
- **Version**: 1.0.7
- **Home page**: https://github.com/vipas-engineering/vipas-python-sdk
- **Summary**: Python SDK for Vipas AI Platform
- **Upload time**: 2024-11-25 07:07:40
- **Author**: Vipas Team
- **Requires Python**: >=3.7
- **License**: Apache License 2.0
# VIPAS AI Platform SDK
The Vipas AI Python SDK provides a simple and intuitive interface to interact with the Vipas AI platform. This SDK allows you to easily make predictions using pre-trained models hosted on the Vipas AI platform.

## Table of Contents

- [VIPAS AI Platform SDK](#vipas-ai-platform-sdk)
  - [Requirements](#requirements)
  - [Installation & Usage](#installation--usage)
    - [pip install](#pip-install)
  - [Prerequisites](#prerequisites)
    - [Step 1: Fetching the Auth Token](#step-1-fetching-the-auth-token)
    - [Step 2: Setting the Auth Token as an Environment Variable](#step-2-setting-the-auth-token-as-an-environment-variable)
  - [Getting Started](#getting-started)
    - [Basic Usage](#basic-usage)
    - [Handling Exceptions](#handling-exceptions)
  - [Asynchronous Inference Mode](#asynchronous-inference-mode)
    - [Asynchronous Inference Mode Example](#asynchronous-inference-mode-example)
  - [Real-Time Inference Mode](#real-time-inference-mode)
    - [Real-Time Inference Mode Example](#real-time-inference-mode-example)
  - [Creating Model on Vipas.AI Platform](#creating-model-on-vipasai-platform)
  - [Publishing Model](#publishing-model)
    - [Publishing Process Overview](#publishing-process-overview)
  - [Retrieving Model Deployment Logs with the Vipas.AI SDK](#retrieving-model-deployment-logs-with-the-vipasai-sdk)
  - [Evaluating a Model against a Challenge](#evaluating-a-model-against-a-challenge)
  - [Listing the submissions of a Challenge](#listing-the-submissions-of-a-challenge)
  - [Logging](#logging)
    - [LoggerClient Usage](#loggerclient-usage)
    - [Example of LoggerClient](#example-of-loggerclient)
  - [License](#license)

## Requirements

Python 3.7+

## Installation & Usage
### pip install

You can install the vipas SDK from PyPI using the following command:

```sh
pip install vipas
```
(you may need to run `pip` with root permission: `sudo pip install vipas`)

Then import the package:
```python
import vipas
```
## Prerequisites
Before using the Vipas.AI SDK to manage and publish models, you need to fetch your VPS Auth Token from the Vipas.AI platform and configure it as an environment variable.

#### Step 1: Fetching the Auth Token
This section explains how to fetch the VPS Auth Token required to authenticate your SDK requests. You can use one of two methods to obtain the token.

##### Method 1: Using the Vipas.AI Platform

1. **Login to Vipas.AI**:  
   Visit the [Vipas.AI platform](https://vipas.ai/) and log in to your account.

2. **Access Settings**:  
   Click on your user profile icon in the top-right corner and navigate to the **Settings** page.

3. **Generate the Token**:  
   Locate the **Temporary Access Token** section, enter your password, and click the button to generate a new token.

4. **Copy the Token**:  
   Copy the generated token, as you will need it to configure the SDK.

---

##### Method 2: Using the `generate_token` SDK Function

---


The Vipas.AI SDK allows users to programmatically retrieve their authentication token using the `generate_token` function. This token is essential for authenticating SDK requests and ensuring secure access to the platform.

---

### Function Signature

```python
vipas.user.UserClient.generate_token(username: str, password: str) → Dict[str, Any]
```

---

### Parameters

- **`username` (str)**:  
  The registered username of your Vipas.AI account.  
  - **Constraints**: Required field, must be a valid username.

- **`password` (str)**:  
  The registered password of your Vipas.AI account.  
  - **Constraints**: Required field, must be a valid password.

---

### Return Value

The `generate_token` function returns a dictionary containing the generated token:
- **`vps_auth_token`**: The generated authentication token.

---

### Example Usage

Here’s how you can use the `generate_token` function to generate an authentication token:

```python
from vipas.user import UserClient
from vipas.exceptions import ClientException

try:
    # Define user credentials
    username = "your_username"
    password = "your_password"

    # Create a UserClient instance
    user_client = UserClient()

    # Generate the Auth Token
    auth_response = user_client.generate_token(username=username, password=password)

    # Extract and print the token
    auth_token = auth_response.get('vps_auth_token')
    print(f"Authentication Token: {auth_token}")

except ClientException as e:
    print(f"Error generating token: {e}")
except Exception as e:
    print(f"Unexpected error: {e}")
```

---

### Handling the Response

The response from the `generate_token` function is structured as follows:

```json
{
  "vps_auth_token": "<Your generated authentication token>"
}
```

This response provides the token, which can be set as an environment variable as described in the following steps.

---

### Error Handling

The `generate_token` function raises exceptions for various error scenarios:

| Exception                                | Description                                                                            |
|------------------------------------------|----------------------------------------------------------------------------------------|
| **`vipas.exceptions.ClientException`**  | Raised when the provided username or password is incorrect.                            |
| **`vipas.exceptions.UnauthorizedException`** | Raised if the authentication request is unauthorized (e.g., invalid credentials).   |


---

By leveraging the `generate_token` function, users can efficiently authenticate their SDK requests and securely interact with the Vipas.AI platform.

#### Step 2: Setting the Auth Token as an Environment Variable
You need to set the VPS_AUTH_TOKEN as an environment variable to use it within your SDK.

##### For Linux and macOS
1. Open a **Terminal**.
2. Run the following command to export the token:

    ```bash
    export VPS_AUTH_TOKEN=<TOKEN>
    ```
   Replace `<TOKEN>` with the actual token you copied from the Vipas.AI UI.
3. To make it persistent across sessions, add the following line to your **~/.bashrc**, **~/.zshrc**, or the corresponding shell configuration file:

    ```bash
    export VPS_AUTH_TOKEN=<TOKEN>
    ```
    Then source the file to apply it to the current session:
    ```bash
    source ~/.bashrc
    ```
##### For Windows
1. Open **Command Prompt** or **PowerShell**.
2. Run the following command to set the token for the current session:
    ```powershell
    set VPS_AUTH_TOKEN=<TOKEN>
    ```
3. To set it permanently, follow these steps:
    1. Open the Start menu, search for **Environment Variables**, and open the **Edit the system environment variables** option.
    2. In the **System Properties** window, click on **Environment Variables**.
    3. Under **User variables**, click **New**.
    4. Set the **Variable name** to **VPS_AUTH_TOKEN** and the **Variable value** to `<TOKEN>`.
    5. Click **OK** to save.

Once you’ve set the environment variable, you can proceed with using the SDK, as it will automatically pick up the token from the environment for authentication.
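Because the SDK picks up the token from the environment, a missing variable typically only surfaces as an authentication error later. If you want to fail fast with a clear message before making any SDK calls, a small stdlib check can help (the `get_vps_auth_token` helper below is our own illustration, not part of the SDK):

```python
import os

def get_vps_auth_token() -> str:
    """Return the VPS auth token from the environment, raising a clear error if unset."""
    token = os.environ.get("VPS_AUTH_TOKEN")
    if not token:
        raise RuntimeError(
            "VPS_AUTH_TOKEN is not set. Export it as shown above before using the SDK."
        )
    return token
```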





## Getting Started

To get started with the Vipas AI Python SDK, you need to create a ModelClient object and use it to make predictions. Below is a step-by-step guide on how to do this.

### `vipas.model.ModelClient.predict(model_id: str, input_data: str, async_mode: bool = True) → dict`

Make a prediction using a deployed model.

#### Parameters:
- `model_id` (str): The unique identifier of the model.
- `input_data` (str): The input data for the prediction, usually a string such as a base64-encoded image or plain text.
- `async_mode` (bool): Whether to perform the prediction asynchronously (default: True).

#### Returns:
- `dict`: A dictionary containing the result of the prediction process.

### Basic Usage

1. Import the necessary modules:
```python
from vipas import model
```

2. Create a ModelClient object:
```python
vps_model_client = model.ModelClient()
```

3. Make a prediction:

```python
model_id = "<MODEL_ID>"
api_response = vps_model_client.predict(model_id=model_id, input_data="<INPUT_DATA>")
```
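For binary inputs such as images, `input_data` is typically passed as a base64-encoded string. A minimal stdlib sketch of that encoding step (the helper name is our own, not part of the SDK):

```python
import base64

def encode_file_to_base64(path: str) -> str:
    """Read a binary file and return its contents as a base64 string."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")
```

The resulting string can then be passed as `input_data` to `predict`.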

### Handling Exceptions
The SDK provides specific exceptions to handle different error scenarios:

1. UnauthorizedException: Raised when the API key is invalid or missing.
2. NotFoundException: Raised when the model is not found.
3. BadRequestException: Raised when the input data is invalid.
4. ForbiddenException: Raised when the user does not have permission to access the model.
5. ConnectionException: Raised when there is a connection error.
6. RateLimitException: Raised when the rate limit is exceeded.
7. ClientException: Raised when there is a client error.

### Asynchronous Inference Mode
---
Asynchronous Inference Mode is a near-real-time inference option that queues incoming requests and processes them asynchronously. This mode is suitable when you need to handle **large payloads** as they arrive or run models with long inference processing times that do not require sub-second latency. **By default, the `predict` method operates in asynchronous mode**, polling the status endpoint until the result is ready. This is ideal for batch processing or tasks where immediate responses are not critical.


#### Asynchronous Inference Mode Example
```python
api_response = vps_model_client.predict(model_id=model_id, input_data="<INPUT_DATA>", async_mode=True)
```
### Real-Time Inference Mode
---
Real-Time Inference Mode is designed for use cases requiring real-time predictions. In this mode, the predict method processes the request immediately and returns the result without polling the status endpoint. This mode is ideal for applications that need quick, real-time responses and can afford to handle potential timeouts for long-running inferences. It is particularly suitable for interactive applications where users expect immediate feedback.

#### Real-Time Inference Mode Example
```python
api_response = vps_model_client.predict(model_id=model_id, input_data="<INPUT_DATA>", async_mode=False)
```

### Detailed Explanation
#### Asynchronous Inference Mode
---
##### Description:
This mode allows the system to handle requests by queuing them and processing them as resources become available. It is beneficial for scenarios where the inference task might take longer to process, and an immediate response is not necessary.

##### Behavior:
The system polls the status endpoint to check if the result is ready and returns the result once processing is complete.

##### Ideal For:
Batch processing, large payloads, long-running inference tasks.

##### Default Setting:
By default, async_mode is set to True to support heavier inference requests.

##### Example Usage:

```python
api_response = vps_model_client.predict(model_id=model_id, input_data="<INPUT_DATA>", async_mode=True)
```

#### Real-Time Inference Mode
---
##### Description:
This mode is intended for use cases that require immediate results. The system processes the request directly and returns the result without polling.

##### Behavior:
The request is processed immediately, and the result is returned. If the inference takes longer than 29 seconds, a 504 Gateway Timeout error is returned.

##### Ideal For:
Applications requiring sub-second latency, interactive applications needing immediate feedback.

##### Example Usage:

```python
api_response = vps_model_client.predict(model_id=model_id, input_data="<INPUT_DATA>", async_mode=False)
```

By understanding and choosing the appropriate mode for your use case, you can optimize the performance and responsiveness of your AI applications on Vipas.AI.


### Example Usage for ModelClient using asynchronous inference mode

```python
from vipas import model
from vipas.exceptions import UnauthorizedException, NotFoundException, ClientException
from vipas.logger import LoggerClient

logger = LoggerClient(__name__)

def main():
    # Create a ModelClient object
    vps_model_client = model.ModelClient()

    # Make a prediction
    try:
        model_id = "<MODEL_ID>"
        api_response = vps_model_client.predict(model_id=model_id, input_data="<INPUT_DATA>")
        logger.info(f"Prediction response: {api_response}")
    except UnauthorizedException as err:
        logger.error(f"UnauthorizedException: {err}")
    except NotFoundException as err:
        logger.error(f"NotFoundException: {err}")
    except ClientException as err:
        logger.error(f"ClientException: {err}")

if __name__ == "__main__":
    main()

```
# Creating Model on Vipas.AI Platform

The **Vipas.AI SDK** provides functionality to create new models on the platform, allowing users to define specific parameters and configurations. The `create_model` function enables users to create a model with a unique ID, configure its attributes, and set permissions for its usage.

## Key Features of the `create_model` Function

- **Project Initialization**: Define a project with the type `model` to register it on the platform.
- **Customizable Parameters**: Specify attributes like project name, project description, price, currency, and permissions.
- **Permission-Based Pricing**: If `api_access` permission is set to private, the price is automatically set to zero, ensuring proper access control.
- **Unique Model ID Generation**: Each created model is assigned a unique identifier (`model_id`) for tracking and future operations.

---

## Basic Usage

The `create_model` function simplifies the process of creating a new model on the Vipas.AI platform. Below is a step-by-step guide to creating a model using the SDK:

### `vipas.model.ModelClient.create_model(project_name: str, project_description: str, permissions: dict, price: Optional[float] = 0.00, currency: Optional[str] = "INR") → str`


### Parameters

- **`project_name` (str)**:  
 The name of the project (model). This is a required field and must not be empty. It supports only alphanumeric characters and non-consecutive hyphens (-). The maximum length is 30 characters.

- **`project_description` (str)**:  
  A brief description of the project. This is a required field and must not be empty. It supports only alphanumeric characters and spaces. The maximum length is 60 
  characters.

- **`price` (float)**:  
  The price of the model. This is an optional field with a default value of 0.0. It is only applicable if api_access is set to public. The price must be between 0.00 and 999.00.

- **`currency` (str)**:  
  Specifies the currency. This is an optional field that accepts only the following values: USD, EUR, INR. The default value is INR.

- **`permissions` (dict)**:  
  A dictionary defining permissions for the project. This is a required field and accepts the following keys: `search_visibility`, `api_access`, `share_model`.

  - **`search_visibility` (Optional[str])**: Determines whether the project is visible in search results. Allowed values: `public` or `private`. Default: `private`.
  - **`api_access` (Optional[str])**: Grants or restricts access to use the model via API. Allowed values: `public` or `private`. Default: `private`.
  - **`share_model` (Optional[str])**: Allows or restricts sharing of the model. Allowed values: `public` or `private`. Default: `private`.
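These constraints are validated server-side, but it can be convenient to mirror them client-side before calling `create_model` so that malformed requests never leave your machine. The sketch below encodes the documented rules; the helper and its regular expressions are our own illustration, not part of the SDK:

```python
import re

ALLOWED_CURRENCIES = {"USD", "EUR", "INR"}

def validate_model_params(project_name: str, project_description: str,
                          price: float = 0.00, currency: str = "INR") -> None:
    """Raise ValueError if any argument violates the documented create_model constraints."""
    # Alphanumeric with non-consecutive (and non-leading/trailing) hyphens, max 30 chars.
    if not project_name or len(project_name) > 30 or \
            not re.fullmatch(r"[A-Za-z0-9]+(-[A-Za-z0-9]+)*", project_name):
        raise ValueError("project_name: 1-30 alphanumeric characters with non-consecutive hyphens")
    # Alphanumeric characters and spaces only, max 60 chars.
    if not project_description or len(project_description) > 60 or \
            not re.fullmatch(r"[A-Za-z0-9 ]+", project_description):
        raise ValueError("project_description: 1-60 alphanumeric characters and spaces")
    if not (0.00 <= price <= 999.00):
        raise ValueError("price must be between 0.00 and 999.00")
    if currency not in ALLOWED_CURRENCIES:
        raise ValueError("currency must be one of USD, EUR, INR")
```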

---

### Return Value

- **`str`**: A unique `model_id` in string format.


---

## Example Usage

Here's a basic example demonstrating how to create a model using the Vipas.AI SDK:

```python
from vipas.model import ModelClient
from vipas.exceptions import ClientException
from vipas.logger import LoggerClient

# Create a LoggerClient instance
logger_client = LoggerClient(__name__)

try:
    # Define model details
    project_name = "Image-Classification-AI"
    project_description = "ResNet50 based image classification model"
    price = 50.0
    currency = "USD"
    permissions = {
        "search_visibility": "public",
        "api_access": "public",
        "share_model": "private"
    }

    # Create a ModelClient instance
    model_client = ModelClient()

    # Call the create_model method to create a new model
    response = model_client.create_model(
        project_name=project_name,
        project_description=project_description,
        price=price,
        currency=currency,
        permissions=permissions
    )
    logger_client.info(f"Model created successfully: {response}")

except ClientException as e:
    logger_client.error(f"ClientException occurred: {e}")
except Exception as e:
    logger_client.error(f"An unexpected error occurred: {e}")
```

---

## Logging Example for Model Creation

The **Vipas.AI SDK** includes detailed logging to provide insights into the model creation process. Below is an example log sequence:

```
2024-11-20 13:03:57,301 - vipas.model - INFO - Initiating model creation. Name: 'sample-project1', Price: 100.0, Currency: 'USD'
2024-11-20 13:03:59,214 - vipas.model - INFO - Model successfully created with ID: <model-id>. You can view your model at: https://vipas.ai/models/<model-id>.
```

In this log sequence:
- The model creation process starts with logging the model's name.
- The second log confirms that the model was successfully created and provides the unique `model_id`.

---

## Handling the Response

The response returned from the `create_model` function contains a `model_id` in string format:
- A `model_id` is a unique identifier for the model.
- You can use the model_id to find and manage your model on [Vipas.AI](https://vipas.ai).

---

## Error Handling

The SDK raises custom exceptions for API responses. Below is a list of possible exceptions and their meanings:

| **Exception**                           | **Description**                                                                                 |
|-----------------------------------------|-------------------------------------------------------------------------------------------------|
| `vipas.exceptions.ClientException` (409)| If the project name already exists.                                                            |
| `vipas.exceptions.UnauthorizedException` (401) | Authentication token is missing, invalid, or expired.                                           |
| `vipas.exceptions.ClientException` (422)| The input data was malformed or incomplete.                                                    |
| `vipas.exceptions.ConnectionException`  | Network connectivity issue or server is unreachable.                                           |
| `vipas.exceptions.ClientException`      | A generic client-side error occurred.                                                          |


## Publishing Model
The Vipas.AI SDK provides a simple and powerful interface for developers to publish, manage, and deploy AI models. With this SDK, developers can upload their models, configure model processors, and deploy them to the Vipas platform seamlessly. This documentation will guide you through the process of using the SDK to publish and manage models built on various machine learning frameworks, including TensorFlow, PyTorch, ONNX, XGBoost, Scikit-learn, and more.


### Getting Started
---
### `vipas.model.ModelClient.publish(model_id: str, model_folder_path: str, model_framework_type: str, onnx_config_path: Optional[str] = None, processor_folder_path: Optional[str] = None, processor_image: Optional[str] = None, auto_launch: bool = True, override_model: bool = True) → dict`

Publish a model to the Vipas AI platform.

#### Parameters:
- `model_id` (str): The unique identifier of the model.
- `model_folder_path` (str): The path to the folder containing the model files.
- `model_framework_type` (str): The framework type of the model (e.g., 'tensorflow', 'pytorch', etc.).
- `onnx_config_path` (Optional[str]): The path to the ONNX config file (if applicable).
- `processor_folder_path` (Optional[str]): The path to the processor folder (if using a custom processor).
- `processor_image` (Optional[str]): The Docker image to use for the processor.
- `auto_launch` (bool): Whether to automatically launch the model after publishing (default: True).
- `override_model` (bool): Whether to override the existing model (default: True).

#### Returns:
- `dict`: A dictionary containing the status and details of the model publishing process.

Here is a basic example of how to use the SDK to publish a model from any remote environment:

```python
from vipas.model import ModelClient
from vipas.exceptions import UnauthorizedException, NotFoundException, ClientException


# Paths to model and processor files
model_folder_path = "/path/to/your/model"
onnx_config_path = "/path/to/config/config.pbtxt"  # Optional, depends on framework
processor_folder_path = "/path/to/your/processor"

# Unique model ID to identify the model in Vipas.AI
model_id = "your_model_id" # mdl-xxxxxxxxx

try:
    # Initialize the ModelClient
    model_client = ModelClient()

    # Publish the model
    model_client.publish(
        model_id=model_id,
        model_folder_path=model_folder_path,
        model_framework_type="tensorflow",  # Supported: tensorflow, pytorch, onnx, xgboost, sklearn, etc.
        onnx_config_path=onnx_config_path,  # Required for the ONNX model framework        
        processor_folder_path=processor_folder_path,  # Optional if using custom processors
        processor_image="vps-processor-base:1.0",  # Currently the only allowed value
        auto_launch=True,  # Whether to automatically launch the model after upload, Default True
        override_model=True  # Whether to override existing model deployments, Default True
    )
except UnauthorizedException as e:
    print(f"UnauthorizedException: {e}")
except NotFoundException as e:
    print(f"NotFoundException: {e}")
except ClientException as e:
    print(f"ClientException: {e}")
except Exception as e:
    print(f"Exception: {e}")
```

### Publishing Process Overview
---
When you publish a model using the Vipas SDK, the following steps occur behind the scenes:
1. **Model Upload**: The SDK uploads the model files from the specified directory. The total size of the files is calculated, and the upload process is logged step-by-step.
2. **Processor Upload (Optional)**: If you are using a custom processor (a custom Python script), the SDK uploads the processor files. This step is optional but can be critical for advanced use cases where model input needs specific transformations.
3. **Processor Staging (Optional)**: After the processor files are uploaded and verified, the processor is placed in a staged state.
4. **Model Staging And Building Processor**: Once the model and its associated files (including the processor, if applicable) are uploaded, the model is placed in a staging state. This stage ensures that all files are correctly uploaded and prepares the model for deployment.
5. **Model Launch (Optional)**: If the auto_launch parameter is set to True, the model will be automatically launched. This means that the model will be deployed and become available for real-time and asynchronous inference. The launch status is logged until the process is completed successfully.
6. **Rollback Mechanism**: If a model is already deployed and a new version is being uploaded, the SDK ensures that the previous version is rolled back in case of any issues during the new model deployment. 
> **Note:** The rollback mechanism does not apply if you set override_model=False.

#### Key parameters
1. **model_id**: The unique identifier for the model. This ID is used to track the model across the platform.
2. **model_folder_path**: The path to the directory containing the model files that need to be uploaded.
3. **model_framework_type**: The framework used for the model (e.g., TensorFlow, PyTorch, ONNX, XGBoost, Scikit-learn). Each framework has its own nuances in terms of model configuration.
4. **onnx_config_path [Optional]**: The path to the configuration file required by the ONNX framework.
5. **processor_folder_path [Optional]**: The path to the folder containing custom processor files, such as a Python script. Required only when using a custom processor.
6. **processor_image [Optional]**: The Docker base image for the processor. Currently only "vps-processor-base:1.0" is supported.
7. **auto_launch[Default: True]**: A boolean flag indicating whether to automatically launch the model after publishing. Default is True.
8. **override_model[Default: True]**: A boolean flag indicating whether to override any existing model deployment. Default is True.

#### Supported Frameworks
The SDK supports the following machine learning frameworks:
1. TensorFlow: Native TensorFlow SavedModel format.
2. PyTorch: Model files saved as .pt or .pth.
3. ONNX: ONNX models typically require a configuration file (with an extension such as `.pbtxt`, `.config`, or `.txt`) defining the input and output shapes.
4. XGBoost: For tree-based models exported from XGBoost.
5. Scikit-learn: For traditional machine learning models exported from scikit-learn.

> ⚠️ **Note:** For ONNX models, you must provide an ONNX configuration file with extensions like `.pbtxt`, `.config`, or `.txt` that describe the input-output mapping.
> 
> Below is an example ONNX configuration for input and output details needed by the model:
> 
> ```pbtxt
> input [
>  {
>    name: "input1"  # Name of the input going to the model (input tensor)
>    data_type: TYPE_FP32  # Data type of the input, FP32 stands for 32-bit floating point (commonly used in deep learning)
>    dims: [1, 3, 224, 224]  # Dimensions of the input tensor: [Batch size, Channels, Height, Width]
>  }
> ]
> output [
>  {
>    name: "output1"  # Name of the output from the model (output tensor)
>    data_type: TYPE_FP32  # Data type of the output, FP32 represents 32-bit floating point
>    dims: [1, 3, 224, 224]  # Dimensions of the output tensor: [Batch size, Channels, Height, Width]
>  }
> ]
> ```
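If you prefer to generate this configuration programmatically, the sketch below shows a stdlib-only helper that renders the same pbtxt structure from plain Python dictionaries. The helper names and the tensor descriptions are illustrative, not part of the vipas SDK:

```python
# Illustrative helper (not part of the vipas SDK): renders a pbtxt-style
# input/output configuration from plain Python dictionaries.

def render_tensor_block(tensor: dict) -> str:
    """Render one input/output entry as pbtxt text."""
    return (
        " {\n"
        f'   name: "{tensor["name"]}"\n'
        f"   data_type: {tensor['data_type']}\n"
        f"   dims: {tensor['dims']}\n"
        " }"
    )

def render_config(inputs: list, outputs: list) -> str:
    """Render the full input/output configuration."""
    in_body = "\n".join(render_tensor_block(t) for t in inputs)
    out_body = "\n".join(render_tensor_block(t) for t in outputs)
    return f"input [\n{in_body}\n]\noutput [\n{out_body}\n]\n"

# Placeholder tensors matching the example configuration above.
config_text = render_config(
    inputs=[{"name": "input1", "data_type": "TYPE_FP32", "dims": [1, 3, 224, 224]}],
    outputs=[{"name": "output1", "data_type": "TYPE_FP32", "dims": [1, 3, 224, 224]}],
)
print(config_text)
```

The resulting string can be written to a `config.pbtxt` file and passed to `publish()` via `onnx_config_path`.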

#### Expected Behavior
1. **Successful Upload**: The model and processor files will be uploaded, and the model will be placed in the staged state.
2. **Automatic Launch**: If auto_launch=True, the model will be launched after the upload completes, making it available for real-time and asynchronous inference.
3. **Override of Existing Models**: If a model with the same model_id is already deployed, the new model will override the previous deployment if override_model=True.

#### Logs Example
Once you run the publish() method, you can expect logs similar to the following:
```bash
2024-10-08 16:15:15,043 - vipas.model - INFO - Publishing model mdl-ikas2ot2ohsux with framework type onnx.
2024-10-08 16:15:19,818 - vipas.model - INFO - File processor.py uploaded successfully.
2024-10-08 16:16:22,952 - vipas.model - INFO - Model mdl-ikas2ot2ohsux and related processor launched successfully.
```

This log sequence shows the entire process of publishing the model, uploading the processor, and successfully launching the model. Any errors or warnings will also be captured in the logs, which can help troubleshoot issues.


# Retrieving Model Deployment Logs with the Vipas.AI SDK

The Vipas.AI SDK provides the `get_logs` function, enabling users to retrieve detailed logs for a specific model. This functionality supports debugging and monitoring by fetching logs associated with the provided `model_id`.

## Key Features of the `get_logs` Function
- **Log Retrieval by Model ID**: Retrieve deployment logs of a specific model by providing its unique identifier.
- **Secure API Access**: Uses the `vps-auth-token` for authentication and ensures secure communication with the API.
- **Detailed Logging**: Provides comprehensive logs for each step of the deployment log retrieval process to ensure transparency and traceability.

## Function Signature
```python
vipas.model.ModelClient.get_logs(model_id: str) → dict
```

### Parameters
- `model_id (str)`:  
  The unique identifier of the model whose logs are to be retrieved.  
  **Constraints**:  
  - Required field.  
  - Must be a valid and existing `model_id`.

### Return Value
**`dict`**: A dictionary containing the model and processor logs, grouped by date. Each log entry includes:
- **`filename`**: Name of the log file.
- **`presigned_url`**: A temporary, secure URL to access the log file.
- **`size`**: Size of the log file in bytes.
- **`last_modified`**: Timestamp indicating when the log file was last updated.

The logs provide insights into the model's operation and are structured for easy interpretation.

## Example Usage
Below is an example demonstrating how to use the `get_logs` function to retrieve logs for a specific model:

```python
from vipas.model import ModelClient
from vipas.exceptions import ClientException
from vipas.logger import LoggerClient

# Create a LoggerClient instance
logger_client = LoggerClient(__name__)

try:
    # Define model ID
    model_id = "mdl-1234abcd5678efgxy"

    # Create a ModelClient instance
    model_client = ModelClient()

    # Call the get_logs method to retrieve logs for the model
    logs = model_client.get_logs(model_id=model_id)

    # Display retrieved logs
    logger_client.info(f"Logs retrieved successfully: {logs}")

except ClientException as e:
    logger_client.error(f"ClientException occurred while retrieving logs: {e}")
except Exception as e:
    logger_client.error(f"An unexpected error occurred: {e}")
```

## Handling the Response
The `get_logs` function returns a dictionary containing the model and processor logs. Below is an example response structure:

```json
{
  "model": {},
  "processor": {
    "2024": {
      "11": {
        "21": [
          {
            "filename": "<Name of the log file>",
            "presigned_url": "<A temporary, secure URL to access the log file>",
            "size": "Size of the log file in bytes",
            "last_modified": "Timestamp indicating when the log file was last updated"
          }
        ]
      }
    }
  }
}
```
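Because entries are nested by year, month, and day, a small helper (illustrative, not part of the SDK) can flatten either section of the response into a single list of log entries:

```python
def flatten_log_entries(section: dict) -> list:
    """Flatten the year -> month -> day -> [entries] nesting into one list."""
    entries = []
    for months in section.values():            # e.g. {"2024": {...}}
        for days in months.values():           # e.g. {"11": {...}}
            for day_entries in days.values():  # e.g. {"21": [...]}
                entries.extend(day_entries)
    return entries

# Example using the response structure shown above (values are placeholders).
response = {
    "model": {},
    "processor": {
        "2024": {"11": {"21": [{"filename": "processor.log", "size": 1024}]}}
    },
}
processor_entries = flatten_log_entries(response["processor"])
print(processor_entries)
```

Each flattened entry still carries its `presigned_url`, which can then be used to download the log file before the URL expires.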

## Error Handling
The `get_logs` function raises custom exceptions to handle various error scenarios:

| Exception                                | Description                                                                                  |
|------------------------------------------|----------------------------------------------------------------------------------------------|
| `vipas.exceptions.ClientException (409)` | If the `model_id` does not exist or is invalid.                                              |
| `vipas.exceptions.UnauthorizedException (401)` | If the authentication token is missing, invalid, or expired.                                 |
| `vipas.exceptions.ClientException (422)` | If the request parameters are malformed or incomplete.                                       |
| `vipas.exceptions.ConnectionException`   | If there is a network connectivity issue or the API server is unreachable.                   |
| `vipas.exceptions.ClientException`       | A generic client-side error occurred during the log retrieval process.                       |


## Evaluating a Model against a Challenge
The Vipas.AI SDK provides functionality to evaluate your models against specific challenges hosted on the Vipas platform. The evaluate function allows you to submit a model for evaluation against a challenge and track its progress until completion.

### Key Features of the evaluate Function:
---
1. **Model and Challenge Pairing**: You must provide both a model_id and a challenge_id to evaluate your model against a particular challenge.
2. **Progress Tracking**: The SDK tracks the progress of the evaluation in the background and logs the status at regular intervals.
3. **Error Handling**: Specific exceptions like ClientException and general exceptions are captured and handled to ensure smooth operations.


### Basic Usage
---
### `vipas.model.ModelClient.evaluate(model_id: str, challenge_id: str) → dict`

Evaluate a model against a challenge.

#### Parameters:
- `model_id` (str): The unique identifier of the model.
- `challenge_id` (str): The unique identifier of the challenge.

#### Returns:
- `dict`: A dictionary containing the result of the model evaluation process.

Here's a basic example demonstrating how to evaluate a model against a challenge using the Vipas.AI SDK:
```python
from vipas.model import ModelClient
from vipas.exceptions import ClientException

try:
    model_id = "mdl-bosb93njhjc97"  # Replace with your model ID
    challenge_id = "chg-2bg7oqy4halgi"  # Replace with the challenge ID

    # Create a ModelClient instance
    model_client = ModelClient()

    # Call the evaluate method to submit the model for evaluation against the challenge
    response = model_client.evaluate(model_id=model_id, challenge_id=challenge_id)

    print(response)

except ClientException as e:
    print(f"ClientException occurred: {e}")
except Exception as e:
    print(f"An unexpected error occurred: {e}")
```

### Logging Example for evaluate
The SDK logs detailed information about the evaluation process, including the model ID and challenge ID being evaluated, as well as the progress of the evaluation. Below is an example of the log output:
```bash
2024-10-17 15:25:19,706 - vipas.model - INFO - Evaluating model mdl-bosb93njhjc97 against the challenge chg-2bg7oqy4halgi.
2024-10-17 15:25:20,472 - vipas._rest - INFO - Evaluate model for model: mdl-bosb93njhjc97 against the challenge: chg-2bg7oqy4halgi is in progress.
2024-10-17 15:25:28,261 - vipas._rest - INFO - Evaluate model for model: mdl-bosb93njhjc97 against the challenge: chg-2bg7oqy4halgi is in progress.
2024-10-17 15:26:10,805 - vipas._rest - INFO - Evaluate model for model: mdl-bosb93njhjc97 against the challenge: chg-2bg7oqy4halgi completed successfully.
```
In this log sequence:

* The evaluation process begins by logging the model ID and challenge ID.
* The progress of the evaluation is tracked and logged at regular intervals.
* Finally, upon successful completion, a message indicates the evaluation was successful.

### Handling the Response
---
The response returned from the evaluate function contains detailed information about the evaluation, including:

* Evaluation status (e.g., inprogress, completed, failed).
* Any associated results or metrics generated during the evaluation process.
* Potential error messages, if the evaluation encounters any issues.
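A simple way to act on the response is to branch on its status field. The sketch below assumes a `"status"` key holding the values listed above; the exact schema is an assumption, not documented behavior:

```python
# Illustrative handler for the evaluate() response; the "status" and
# "error" key names are assumptions, not a documented schema.
def summarize_evaluation(response: dict) -> str:
    status = response.get("status", "unknown")
    if status == "completed":
        return "Evaluation completed successfully."
    if status == "inprogress":
        return "Evaluation is still in progress."
    if status == "failed":
        return f"Evaluation failed: {response.get('error', 'no error message')}"
    return f"Unrecognized evaluation status: {status}"

print(summarize_evaluation({"status": "completed"}))
```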

By integrating the evaluate function into your workflow, you can efficiently evaluate your models against challenges on the Vipas platform and gain insights into their performance.

## Listing the Submissions of a Challenge

The `get_challenge_submissions` function is a convenient method in the Vipas.AI Python SDK for retrieving all submissions made to a specific challenge on the Vipas AI platform. This function allows developers to programmatically access challenge submissions by providing the unique challenge identifier.

---

### Getting Started

To use the `get_challenge_submissions` function, ensure that the Vipas.AI SDK is installed and properly configured in your environment.

---

### Example: Getting Submissions of a Challenge

```python
from vipas.challenge import ChallengeClient

client = ChallengeClient()

print(client.get_challenge_submissions(challenge_id="<CHALLENGE_ID>"))
```

---

### Key Parameters

- **challenge_id**:  
  The unique identifier for the challenge. This ID is used to track the challenge across the platform.

---

### Returns

- **total_count**:  
  Indicates the total number of challenge runtimes retrieved.

- **challenge_runtimes**:  
  A list of challenge runtime objects, where each object contains:
  - **challenge_id**: Unique identifier for the challenge.
  - **entity_id**: The unique identifier of the user who submitted the model.
  - **entity_name**: Name of the entity (e.g., user's name).
  - **model_id**: ID of the associated model.
  - **transaction_id**: Unique transaction ID for the specific runtime.
  - **challenge_runtime_metrics**: Contains system metrics related to the runtime, including:
    - **latency**: Execution latency in milliseconds.
    - **cpu_metric**: CPU utilization in cores.
    - **memory_metric**: Memory utilization in MB.
  - **created_at**: Timestamp when the runtime was created.
  - **updated_at**: Timestamp when the runtime was last updated.
  - **presigned_urls**: Contains temporary URLs to download files related to the runtime:
    - **input_temporary_url**: URL to download the input file.
    - **output_temporary_url**: URL to download the expected output file.
    - **actual_output_temporary_url**: URL to download the actual output file generated by the model.

---

### Response Handling

The response provides detailed information for each user's runtime submission. This includes options to download the input, expected output, and actual output of each submission separately. Additionally, users can access runtime metrics associated with each submission to gain insights into performance and resource utilization.
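For example, the runtime metrics can be aggregated with plain Python once the response is retrieved. The sample response below is illustrative and uses only the fields listed under Returns:

```python
def average_latency(submissions: dict) -> float:
    """Mean latency (ms) across all challenge runtimes; 0.0 if there are none."""
    runtimes = submissions.get("challenge_runtimes", [])
    if not runtimes:
        return 0.0
    total = sum(r["challenge_runtime_metrics"]["latency"] for r in runtimes)
    return total / len(runtimes)

# Placeholder response shaped like the Returns section above.
sample = {
    "total_count": 2,
    "challenge_runtimes": [
        {"model_id": "mdl-aaa",
         "challenge_runtime_metrics": {"latency": 120.0, "cpu_metric": 0.5, "memory_metric": 256}},
        {"model_id": "mdl-bbb",
         "challenge_runtime_metrics": {"latency": 80.0, "cpu_metric": 0.4, "memory_metric": 128}},
    ],
}
print(average_latency(sample))  # 100.0
```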

---

### Error Handling

In case of errors, the SDK raises exceptions:

- **NotFoundException**:  
  Raised when the challenge or submission is not found.
- **ClientException**:  
  Raised for SDK-related errors, such as invalid parameters or authentication issues.
- **Other Exceptions**:  
  Raised for general Python exceptions (e.g., file not found, network errors).

---


## Logging
The SDK provides a LoggerClient class to handle logging. Here's how you can use it:

### LoggerClient Usage

1. Import the `LoggerClient` class:
```python
from vipas.logger import LoggerClient
```

2. Initialize the `LoggerClient`:
```python
logger = LoggerClient(__name__)
```

3. Log messages at different levels:
```python
logger.debug("This is a debug message")
logger.info("This is an info message")
logger.warning("This is a warning message")
logger.error("This is an error message")
logger.critical("This is a critical message")

```

### Example of LoggerClient
Here is a complete example demonstrating the usage of the LoggerClient:

```python
from vipas.logger import LoggerClient

def main():
    logger = LoggerClient(__name__)
    
    logger.info("Starting the main function")
    
    try:
        # Example operation
        result = 10 / 2
        logger.debug(f"Result of division: {result}")
    except ZeroDivisionError as e:
        logger.error("Error occurred: Division by zero")
    except Exception as e:
        logger.critical(f"Unexpected error: {str(e)}")
    finally:
        logger.info("End of the main function")

main()
``` 

## Author
VIPAS.AI

## License
This project is licensed under the terms of the [vipas.ai license](LICENSE.md).

By following the above guidelines, you can effectively use the VIPAS AI Python SDK to interact with the VIPAS AI platform for making predictions, handling exceptions, and logging activities.





            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/vipas-engineering/vipas-python-sdk",
    "name": "vipas",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.7",
    "maintainer_email": null,
    "keywords": null,
    "author": "Vipas Team",
    "author_email": "contact@vipas.ai",
    "download_url": "https://files.pythonhosted.org/packages/b6/1c/6cf43d2f791d7cf1a2987664dd69037254f8e0b003b3b7904a52b2be75be/vipas-1.0.7.tar.gz",
    "platform": null,
    "description": "# VIPAS AI Platform SDK\nThe Vipas AI Python SDK provides a simple and intuitive interface to interact with the Vipas AI platform. This SDK allows you to easily make predictions using pre-trained models hosted on the Vipas AI platform.\n\n## Table of Contents\n\n- [VIPAS AI Platform SDK](#vipas-ai-platform-sdk)\n  - [Requirements](#requirements)\n  - [Installation & Usage](#installation--usage)\n    - [pip install](#pip-install)\n  - [Prerequisites](#prerequisites)\n    - [Step 1: Fetching the Auth Token](#step-1-fetching-the-auth-token)\n    - [Step 2: Setting the Auth Token as an Environment Variable](#step-2-setting-the-auth-token-as-an-environment-variable)\n  - [Getting Started](#getting-started)\n    - [Basic Usage](#basic-usage)\n    - [Handling Exceptions](#handling-exceptions)\n  - [Asynchronous Inference Mode](#asynchronous-inference-mode)\n    - [Asynchronous Inference Mode Example](#asynchronous-inference-mode-example)\n  - [Real-Time Inference Mode](#real-time-inference-mode)\n    - [Real-Time Inference Mode Example](#real-time-inference-mode-example)\n  - [Creating Model on Vipas.AI Platform](#create-model-on-vipas.ai-platform)\n  - [Publishing Model](#publishing-model)\n    - [Publishing Process Overview](#publishing-process-overview)\n  - [Retrieving Model Deployment Logs with the Vipas.AI SDK](#retrieving-model-deployment-logs-with-the-vipas.AI-SDK)\n  - [Evaluating a Model against a Challenge](#evaluating-a-model-against-a-challenge)\n  - [Listing the submissions of a Challenge](#listing-the-submissions-of-a-challenge)\n  - [Logging](#logging)\n    - [LoggerClient Usage](#loggerclient-usage)\n    - [Example of LoggerClient](#example-of-loggerclient)\n  - [License](#license)\n\n## Requirements.\n\nPython 3.7+\n\n## Installation & Usage\n### pip install\n\nYou can install vipas sdk from the pip repository, using the following command:\n\n```sh\npip install vipas\n```\n(you may need to run `pip` with root permission: `sudo pip 
install vipas`)\n\nThen import the package:\n```python\nimport vipas\n```\n## Prerequisites\nBefore using the Vipas.AI SDK to manage and publish models, you need to fetch your VPS Auth Token from the Vipas.AI platform and configure it as an environment variable.\n\n#### Step 1: Fetching the Auth Token:-\nThis section explains how to fetch the VPS Auth Token required to authenticate your SDK requests. You can use one of two methods to obtain the token.\n\n##### Method 1: Using the Vipas.AI Platform\n\n1. **Login to Vipas.AI**:  \n   Visit the [Vipas.AI platform](https://vipas.ai/) and log in to your account.\n\n2. **Access Settings**:  \n   Click on your user profile icon in the top-right corner and navigate to the **Settings** page.\n\n3. **Generate the Token**:  \n   Locate the **Temporary Access Token** section, enter your password, and click the button to generate a new token.\n\n4. **Copy the Token**:  \n   Copy the generated token, as you will need it to configure the SDK.\n\n---\n\n##### Method 2: Using the `generate_token` SDK Function\n\n---\n\n\nThe Vipas.AI SDK allows users to programmatically retrieve their authentication token using the `generate_token` function. This token is essential for authenticating SDK requests and ensuring secure access to the platform.\n\n---\n\n### Function Signature\n\n```python\nvipas.user.UserClient.generate_token(username: str, password: str) \u2192 Dict[str, Any]\n```\n\n---\n\n### Parameters\n\n- **`username` (str)**:  \n  The registered username of your Vipas.AI account.  \n  - **Constraints**: Required field, must be a valid username.\n\n- **`password` (str)**:  \n  The registered password of your Vipas.AI account.  
\n  - **Constraints**: Required field, must be a valid password.\n\n---\n\n### Return Value\n\nThe `generate_token` function returns a dictionary containing the generated token:\n- **`vps_auth_token`**: The generated authentication token.\n\n---\n\n### Example Usage\n\nHere\u2019s how you can use the `generate_token` function to generate an authentication token:\n\n```python\nfrom vipas.user import UserClient\nfrom vipas.exceptions import ClientException\n\ntry:\n    # Define user credentials\n    username = \"your_username\"\n    password = \"your_password\"\n\n    # Create a UserClient instance\n    user_client = UserClient()\n\n    # Generate the Auth Token\n    auth_response = user_client.generate_token(username=username, password=password)\n\n    # Extract and print the token\n    auth_token = auth_response.get('vps_auth_token')\n    print(f\"Authentication Token: {auth_token}\")\n\nexcept ClientException as e:\n    print(f\"Error generating token: {e}\")\nexcept Exception as e:\n    print(f\"Unexpected error: {e}\")\n```\n\n---\n\n### Handling the Response\n\nThe response from the `generate_token` function is structured as follows:\n\n```json\n{\n  \"vps_auth_token\": \"<Your generated authentication token>\"\n}\n```\n\nThis response provides the token that can be set as an environment variable as mentioned further steps,.\n\n---\n\n### Error Handling\n\nThe `generate_token` function raises exceptions for various error scenarios:\n\n| Exception                                | Description                                                                            |\n|------------------------------------------|----------------------------------------------------------------------------------------|\n| **`vipas.exceptions.ClientException`**  | Raised when the provided username or password is incorrect.                            |\n| **`vipas.exceptions.UnauthorizedException`** | Raised if the authentication request is unauthorized (e.g., invalid credentials).   
|\n\n\n---\n\nBy leveraging the `generate_token` function, users can efficiently authenticate their SDK requests and securely interact with the Vipas.AI platform.\n\n#### Step 2: Setting the Auth Token as an Environment Variable:-\nYou need to set the VPS_AUTH_TOKEN as an environment variable to use it within your SDK.\n\n##### For linux and macOS\n1. Open a **Terminal**.\n2. Run the following command to export the token:\n\n    ```bash\n    export VPS_AUTH_TOKEN=<TOKEN>\n    ```\n   Replace <TOKEN> with the actual token you copied from the Vipas.AI UI.\n3. To make it persistent across sessions, add the following line to your **~/.bashrc, ~/.zshrc**, or the corresponding shell configuration file\n\n    ```bash\n    export VPS_AUTH_TOKEN=<TOKEN>\n    ```\n    Then use this command to source it to the current running \n    session\n    ```bash\n    source ~/.bashrc.\n    ```\n##### For Windows\n1. Open **Command Prompt** or **PowerShell**.\n2. Run the following command to set the token for the current session:\n    ```powershell\n    set VPS_AUTH_TOKEN=<TOKEN>\n    ```\n3. To set it permanently, follow these steps:\n    1. Open the Start menu, search for **Environment Variables**, and open the **Edit the system environment variables** option.\n    2. In the **System Properties** window, click on **Environment Variables**.\n    3. Under **User variables**, click **New**.\n    4. Set the **Variable name** to **VPS_AUTH_TOKEN** and the Variable value to <TOKEN>.\n    5. Click **OK** to save.\n\nOnce you\u2019ve set the environment variable, you can proceed with using the SDK, as it will automatically pick up the token from the environment for authentication.\n\n\n\n\n\n## Getting Started\n\nTo get started with the Vipas AI Python SDK, you need to create a ModelClient object and use it to make predictions. 
Below is a step-by-step guide on how to do this.\n\n### `vipas.model.ModelClient.predict(model_id: str, input_data: str, async_mode: bool = True) \u2192 dict`\n\nMake a prediction using a deployed model.\n\n#### Parameters:\n- `model_id` (str): The unique identifier of the model.\n- `input_data` (Any): The input data for the prediction, usually in string format (e.g., base64 encoded image or text data).\n- `async_mode` (bool): Whether to perform the prediction asynchronously (default: True).\n\n#### Returns:\n- `dict`: A dictionary containing the result of the prediction process.\n\n### Basic Usage\n\n1. Import the necessary modules:\n```python\nfrom vipas import model\n```\n\n2. Create a ModelClient object:\n```python\nvps_model_client = model.ModelClient()\n```\n\n3. Make a prediction:\n\n```python\nmodel_id = \"<MODEL_ID>\"\napi_response = vps_model_client.predict(model_id=model_id, input_data=\"<INPUT_DATA>\")\n```\n\n### Handling Exceptions\nThe SDK provides specific exceptions to handle different error scenarios:\n\n1. UnauthorizedException: Raised when the API key is invalid or missing.\n2. NotFoundException: Raised when the model is not found.\n3. BadRequestException: Raised when the input data is invalid.\n4. ForbiddenException: Raised when the user does not have permission to access the model.\n5. ConnectionException: Raised when there is a connection error.\n6. RateLimitException: Raised when the rate limit is exceeded.\n7. ClientException: Raised when there is a client error.\n\n### Asynchronous Inference Mode\n---\nAsynchronous Inference Mode is a near-real-time inference option that queues incoming requests and processes them asynchronously. This mode is suitable when you need to handle `large payloads` as they arrive or run models with long inference processing times that do not require sub-second latency. `By default, the predict method operates in asynchronous mode`, which will poll the status endpoint until the result is ready. 
This is ideal for batch processing or tasks where immediate responses are not critical.\n\n\n#### Asynchronous Inference Mode Example\n```python\napi_response = vps_model_client.predict(model_id=model_id, input_data=\"<INPUT_DATA>\", async_mode=True)\n```\n### Real-Time Inference Mode\n---\nReal-Time Inference Mode is designed for use cases requiring real-time predictions. In this mode, the predict method processes the request immediately and returns the result without polling the status endpoint. This mode is ideal for applications that need quick, real-time responses and can afford to handle potential timeouts for long-running inferences. It is particularly suitable for interactive applications where users expect immediate feedback.\n\n#### Real-Time Inference Mode Example\n```python\napi_response = vps_model_client.predict(model_id=model_id, input_data=\"<INPUT_DATA>\", async_mode=False)\n```\n\n### Detailed Explanation\n#### Asynchronous Inference Mode\n---\n##### Description:\nThis mode allows the system to handle requests by queuing them and processing them as resources become available. It is beneficial for scenarios where the inference task might take longer to process, and an immediate response is not necessary.\n\n##### Behavior:\nThe system polls the status endpoint to check if the result is ready and returns the result once processing is complete.\n\n##### Ideal For:\nBatch processing, large payloads, long-running inference tasks.\n\n##### Default Setting:\nBy default, async_mode is set to True to support heavier inference requests.\n\n##### Example Usage:\n\n```python\napi_response = vps_model_client.predict(model_id=model_id, input_data=\"<INPUT_DATA>\", async_mode=True)\n```\n\n#### Real-Time Inference Mode\n---\n##### Description:\nThis mode is intended for use cases that require immediate results. 
The system processes the request directly and returns the result without polling.\n\n##### Behavior:\nThe request is processed immediately, and the result is returned. If the inference takes longer than 29 seconds, a 504 Gateway Timeout error is returned.\n\n##### Ideal For:\nApplications requiring sub-second latency, interactive applications needing immediate feedback.\n\n##### Example Usage:\n\n```python\napi_response = vps_model_client.predict(model_id=model_id, input_data=\"<INPUT_DATA>\", async_mode=False)\n```\n\nBy understanding and choosing the appropriate mode for your use case, you can optimize the performance and responsiveness of your AI applications on Vipas.AI.\n\n\n### Example Usage for ModelClient using asychronous inference mode\n\n```python\nfrom vipas import model\nfrom vipas.exceptions import UnauthorizedException, NotFoundException, ClientException\nfrom vipas.logger import LoggerClient\n\nlogger = LoggerClient(__name__)\n\ndef main():\n    # Create a ModelClient object\n    vps_model_client = model.ModelClient()\n\n    # Make a prediction\n    try:\n        model_id = \"<MODEL_ID>\"\n        api_response = vps_model_client.predict(model_id=model_id, input_data=\"<INPUT_DATA>\")\n        logger.info(f\"Prediction response: {api_response}\")\n    except UnauthorizedException as err:\n        logger.error(f\"UnauthorizedException: {err}\")\n    except NotFoundException as err:\n        logger.error(f\"NotFoundException: {err}\")\n    except ClientException as err:\n        logger.error(f\"ClientException: {err}\")\n\nmain()\n\n```\n# Creating Model on Vipas.AI Platform\n\nThe **Vipas.AI SDK** provides functionality to create new models on the platform, allowing users to define specific parameters and configurations. 
The `create_model` function enables users to create a model with a unique ID, configure its attributes, and set permissions for its usage.\n\n## Key Features of the `create_model` Function\n\n- **Project Initialization**: Define a project with the type `model` to register it on the platform.\n- **Customizable Parameters**: Specify attributes like project name, project description, price, currency, and permissions.\n- **Permission-Based Pricing**: If `api_access` permission is set to private, the price is automatically set to zero, ensuring proper access control.\n- **Unique Model ID Generation**: Each created model is assigned a unique identifier (`model_id`) for tracking and future operations.\n\n---\n\n## Basic Usage\n\nThe `create_model` function simplifies the process of creating a new model on the Vipas.AI platform. Below is a step-by-step guide to creating a model using the SDK:\n\n### `vipas.model.ModelClient.create_model(project_name: str, project_description: str, price: Optional[float] = 0.00, currency: Optional[str] = \"INR\", permissions: dict) \u2192 str`\n\n\n### Parameters\n\n- **`project_name` (str)**:  \n The name of the project (model). This is a required field and must not be empty. It supports only alphanumeric characters and non-consecutive hyphens (-). The maximum length is 30 characters.\n\n- **`project_description` (str)**:  \n  A brief description of the project. This is a required field and must not be empty. It supports only alphanumeric characters and spaces. The maximum length is 60 \n  characters.\n\n- **`price` (float)**:  \n  The price of the model. This is an optional field with a default value of 0.0. It is only applicable if api_access is set to public. The price must be between 0.00 and 999.00.\n\n- **`currency` (str)**:  \n  Specifies the currency. This is an optional field that accepts only the following values: USD, EUR, INR. 
The default value is INR.\n\n- **`permissions` (dict)**:  \n  A dictionary defining permissions for the project. This is a required field and accepts the following keys: `search_visibility`, `api_access`, `share_model`.\n\n  - **`search_visibility` (Optional[str])**: Determines whether the project is visible in search results. Allowed values: `public` or `private`. Default: `private`.\n  - **`api_access` (Optional[str])**: Grants or restricts access to use the model via API. Allowed values: `public` or `private`. Default: `private`.\n  - **`share_model` (Optional[str])**: Allows or restricts sharing of the model. Allowed values: `public` or `private`. Default: `private`.\n\n---\n\n### Return Value\n\n- **`str`**: A unique `model_id` in string format.\n\n\n---\n\n## Example Usage\n\nHere's a basic example demonstrating how to create a model using the Vipas.AI SDK:\n\n```python\nfrom vipas.model import ModelClient\nfrom vipas.exceptions import ClientException\nfrom vipas.logger import LoggerClient\n\n# Create a LoggerClient instance\nlogger_client = LoggerClient(__name__)\n\ntry:\n    # Define model details\n    project_name = \"Image-Classification-AI\"\n    project_description = \"ResNet50 based image classification model\"\n    price = 50.0\n    currency = \"USD\"\n    permissions = {\n        \"search_visibility\": \"public\",\n        \"api_access\": \"public\",\n        \"share_model\": \"private\"\n    }\n\n    # Create a ModelClient instance\n    model_client = ModelClient()\n\n    # Call the create_model method to create a new model\n    response = model_client.create_model(\n        project_name=project_name,\n        project_description=project_description,\n        price=price,\n        currency=currency,\n        permissions=permissions\n    )\n    logger_client.info(f\"Model created successfully: {response}\")\n\nexcept ClientException as e:\n    logger_client.error(f\"ClientException occurred: {e}\")\nexcept Exception as e:\n    
logger_client.error(f\"An unexpected error occurred: {e}\")\n```\n\n---\n\n## Logging Example for Model Creation\n\nThe **Vipas.AI SDK** includes detailed logging to provide insights into the model creation process. Below is an example log sequence:\n\n```\n2024-11-20 13:03:57,301 - vipas.model - INFO - Initiating model creation. Name: 'sample-project1', Price: 100.0, Currency: 'USD'\n2024-11-20 13:03:59,042 - vipas.model - INFO - Model successfully created with ID: <model-id>. You can view your model at: https://vipas.ai/models/<model-id>.\n```\n\nIn this log sequence:\n- The first log records the start of model creation, along with the model's name, price, and currency.\n- The second log confirms that the model was successfully created and provides the unique `model_id`.\n\n---\n\n## Handling the Response\n\nThe response returned from the `create_model` function contains a `model_id` in string format:\n- A `model_id` is a unique identifier for the model.\n- You can use the `model_id` to find and manage your model on [Vipas.AI](https://vipas.ai).\n\n---\n\n## Error Handling\n\nThe SDK raises custom exceptions for API responses. Below is a list of possible exceptions and their meanings:\n\n| **Exception**                           | **Description**                                                                                 |\n|-----------------------------------------|-------------------------------------------------------------------------------------------------|\n| `vipas.exceptions.ClientException` (409)| The project name already exists.                                                                |\n| `vipas.exceptions.UnauthorizedException` (401) | The authentication token is missing, invalid, or expired.                                |\n| `vipas.exceptions.ClientException` (422)| The input data was malformed or incomplete.                                                     |\n| `vipas.exceptions.ConnectionException`  | Network connectivity issue or server is unreachable.              
                             |\n| `vipas.exceptions.ClientException`      | A generic client-side error occurred.                                                          |\n\n\n## Publishing a Model\nThe Vipas.AI SDK provides a simple and powerful interface for developers to publish, manage, and deploy AI models. With this SDK, developers can upload their models, configure model processors, and deploy them to the Vipas platform seamlessly. This documentation will guide you through the process of using the SDK to publish and manage models built on various machine learning frameworks, including TensorFlow, PyTorch, ONNX, XGBoost, Scikit-learn, and more.\n\n\n### Getting Started\n---\n### `vipas.model.ModelClient.publish(model_id: str, model_folder_path: str, model_framework_type: str, onnx_config_path: Optional[str] = None, processor_folder_path: Optional[str] = None, processor_image: Optional[str] = None, auto_launch: bool = True, override_model: bool = True) \u2192 dict`\n\nPublish a model to the Vipas AI platform.\n\n#### Parameters:\n- `model_id` (str): The unique identifier of the model.\n- `model_folder_path` (str): The path to the folder containing the model files.\n- `model_framework_type` (str): The framework type of the model (e.g., 'tensorflow', 'pytorch', etc.).\n- `onnx_config_path` (Optional[str]): The path to the ONNX config file (if applicable).\n- `processor_folder_path` (Optional[str]): The path to the processor folder (if using a custom processor).\n- `processor_image` (Optional[str]): The Docker image to use for the processor.\n- `auto_launch` (bool): Whether to automatically launch the model after publishing (default: True).\n- `override_model` (bool): Whether to override the existing model (default: True).\n\n#### Returns:\n- `dict`: A dictionary containing the status and details of the model publishing process.\n\nHere is a basic example of how to use the SDK to publish a model from any remote environment:\n\n```python\nfrom vipas.model import 
ModelClient\nfrom vipas.exceptions import UnauthorizedException, NotFoundException, ClientException\n\n\n# Paths to model and processor files\nmodel_folder_path = \"/path/to/your/model\"\nonnx_config_path = \"/path/to/config/config.pbtxt\"  # Optional, depends on framework\nprocessor_folder_path = \"/path/to/your/processor\"\n\n# Unique model ID to identify the model in Vipas.AI\nmodel_id = \"your_model_id\"  # mdl-xxxxxxxxx\n\ntry:\n    # Initialize the ModelClient\n    model_client = ModelClient()\n\n    # Publish the model\n    model_client.publish(\n        model_id=model_id,\n        model_folder_path=model_folder_path,\n        model_framework_type=\"tensorflow\",  # Supported: tensorflow, pytorch, onnx, xgboost, sklearn, etc.\n        onnx_config_path=onnx_config_path,  # Required only for the ONNX framework\n        processor_folder_path=processor_folder_path,  # Optional, only if using a custom processor\n        processor_image=\"vps-processor-base:1.0\",  # Allowed values: [\"vps-processor-base:1.0\"]\n        auto_launch=True,  # Automatically launch the model after upload (default: True)\n        override_model=True  # Override any existing deployment (default: True)\n    )\nexcept UnauthorizedException as e:\n    print(f\"UnauthorizedException: {e}\")\nexcept NotFoundException as e:\n    print(f\"NotFoundException: {e}\")\nexcept ClientException as e:\n    print(f\"ClientException: {e}\")\nexcept Exception as e:\n    print(f\"Exception: {e}\")\n```\n\n### Publishing Process Overview\n---\nWhen you publish a model using the Vipas SDK, the following steps occur behind the scenes:\n1. **Model Upload**: The SDK uploads the model files from the specified directory. The total size of the files is calculated, and the upload process is logged step-by-step.\n2. **Processor Upload (Optional)**: If you are using a custom processor (a custom Python script), the SDK uploads the processor files. 
This step is optional but can be critical for advanced use cases where model input needs specific transformations.\n3. **Processor Staging (Optional)**: After the processor upload, the processor is staged once its files have been uploaded correctly.\n4. **Model Staging and Processor Build**: Once the model and its associated files (including the processor, if applicable) are uploaded, the model is placed in a staging state. This stage ensures that all files are correctly uploaded and prepares the model for deployment.\n5. **Model Launch (Optional)**: If the `auto_launch` parameter is set to True, the model will be automatically launched. This means that the model will be deployed and become available for real-time and asynchronous inference. The launch status is logged until the process is completed successfully.\n6. **Rollback Mechanism**: If a model is already deployed and a new version is being uploaded, the SDK ensures that the previous version is rolled back in case of any issues during the new model deployment. \n> **Note:** The rollback mechanism will not occur if you set `override_model=False`.\n\n#### Key parameters\n1. **model_id**: The unique identifier for the model. This ID is used to track the model across the platform.\n2. **model_folder_path**: The path to the directory containing the model files that need to be uploaded.\n3. **model_framework_type**: The framework used for the model (e.g., TensorFlow, PyTorch, ONNX, XGBoost, Scikit-learn). Each framework has its own nuances in terms of model configuration.\n4. **onnx_config_path [Optional]**: The path to the ONNX configuration file required by the ONNX framework.\n5. **processor_folder_path [Optional]**: The path to the folder containing the custom processor files, such as a Python script. Required only when using a custom processor.\n6. **processor_image [Optional]**: The Docker base image for the processor. Currently only \"vps-processor-base:1.0\" is supported.\n7. 
**auto_launch [Default: True]**: A boolean flag indicating whether to automatically launch the model after publishing. Default is True.\n8. **override_model [Default: True]**: A boolean flag indicating whether to override any existing model deployment. Default is True.\n\n#### Supported Frameworks\nThe SDK supports the following machine learning frameworks:\n1. TensorFlow: Native TensorFlow SavedModel format.\n2. PyTorch: Model files saved as .pt or .pth.\n3. ONNX: ONNX models typically require a configuration file with an extension such as .pbtxt, .config, or .txt for setting input and output shapes.\n4. XGBoost: For tree-based models exported from XGBoost.\n5. Scikit-learn: For traditional machine learning models exported from scikit-learn.\n\n> \u26a0\ufe0f **Note:** For ONNX models, you must provide an ONNX configuration file with an extension such as `.pbtxt`, `.config`, or `.txt` that describes the input-output mapping.\n> \n> Below is an example ONNX configuration (protobuf text format) for the input and output details needed by the model:\n> \n> ```protobuf\n> input [\n>  {\n>    name: \"input1\"  # Name of the input tensor expected by the model\n>    data_type: TYPE_FP32  # Data type of the input; FP32 is 32-bit floating point (common in deep learning)\n>    dims: [1, 3, 224, 224]  # Dimensions of the input tensor: [Batch size, Channels, Height, Width]\n>  }\n> ]\n> output [\n>  {\n>    name: \"output1\"  # Name of the output tensor produced by the model\n>    data_type: TYPE_FP32  # Data type of the output; FP32 is 32-bit floating point\n>    dims: [1, 3, 224, 224]  # Dimensions of the output tensor: [Batch size, Channels, Height, Width]\n>  }\n> ]\n> ```\n\n#### Expected Behavior\n1. **Successful Upload**: The model and processor files will be uploaded, and the model will be placed in the staged state.\n2. **Automatic Launch**: If auto_launch=True, the model will be launched after the upload completes, making it available for real-time and asynchronous inference.\n3. 
**Override of Existing Models**: If a model with the same model_id is already deployed, the new model will override the previous deployment if override_model=True.\n\n#### Logs Example\nOnce you run the publish() method, you can expect logs similar to the following:\n```bash\n2024-10-08 16:15:15,043 - vipas.model - INFO - Publishing model mdl-ikas2ot2ohsux with framework type onnx.\n2024-10-08 16:15:19,818 - vipas.model - INFO - File processor.py uploaded successfully.\n2024-10-08 16:16:22,952 - vipas.model - INFO - Model mdl-ikas2ot2ohsux and related processor launched successfully.\n```\n\nThis log sequence shows the entire process of publishing the model, uploading the processor, and successfully launching the model. Any errors or warnings will also be captured in the logs, which can help troubleshoot issues.\n\n\n# Retrieving Model Deployment Logs with the Vipas.AI SDK\n\nThe Vipas.AI SDK provides the `get_logs` function, enabling users to retrieve detailed logs for a specific model. This functionality supports debugging and monitoring by fetching logs associated with the provided `model_id`.\n\n## Key Features of the `get_logs` Function\n- **Log Retrieval by Model ID**: Retrieve deployment logs of a specific model by providing its unique identifier.\n- **Secure API Access**: Uses the `vps-auth-token` for authentication and ensures secure communication with the API.\n- **Detailed Logging**: Provides comprehensive logs for each step of the deployment log retrieval process to ensure transparency and traceability.\n\n## Function Signature\n```python\nvipas.model.ModelClient.get_logs(model_id: str) \u2192 dict\n```\n\n### Parameters\n- `model_id (str)`:  \n  The unique identifier of the model whose logs are to be retrieved.  \n  **Constraints**:  \n  - Required field.  
\n  - Must be a valid and existing `model_id`.\n\n### Return Value\n**`dict`**: A dictionary containing metadata about the logs, including:\n- **`filename`**: Name of the log file.\n- **`presigned_url`**: A temporary, secure URL to access the log file.\n- **`size`**: Size of the log file in bytes.\n- **`last_modified`**: Timestamp indicating when the log file was last updated.\n\nThe logs provide insights into the model's operation and are structured for easy interpretation.\n\n## Example Usage\nBelow is an example demonstrating how to use the `get_logs` function to retrieve logs for a specific model:\n\n```python\nfrom vipas.model import ModelClient\nfrom vipas.exceptions import ClientException\nfrom vipas.logger import LoggerClient\n\n# Create a LoggerClient instance\nlogger_client = LoggerClient(__name__)\n\ntry:\n    # Define model ID\n    model_id = \"mdl-1234abcd5678efgxy\"\n\n    # Create a ModelClient instance\n    model_client = ModelClient()\n\n    # Call the get_logs method to retrieve logs for the model\n    logs = model_client.get_logs(model_id=model_id)\n\n    # Display retrieved logs\n    logger_client.info(f\"Logs retrieved successfully: {logs}\")\n\nexcept ClientException as e:\n    logger_client.error(f\"ClientException occurred while retrieving logs: {e}\")\nexcept Exception as e:\n    logger_client.error(f\"An unexpected error occurred: {e}\")\n```\n\n## Handling the Response\nThe `get_logs` function returns a dictionary containing the model and processor logs. 
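Because the returned metadata nests entries under component, year, month, and day keys, it can help to flatten it before downloading individual files. The helper below is a hypothetical sketch (not part of the SDK) whose field layout is assumed from the example response that follows:

```python
def extract_log_files(logs):
    """Flatten a {component: {year: {month: {day: [entries]}}}} log response
    into a flat list of log-file entries tagged with component and date.

    NOTE: hypothetical helper; the nesting and entry fields are assumptions
    based on the documented example response.
    """
    files = []
    for component, years in logs.items():
        for year, months in (years or {}).items():
            for month, days in months.items():
                for day, entries in days.items():
                    for entry in entries:
                        record = {"component": component,
                                  "date": f"{year}-{month}-{day}"}
                        record.update(entry)
                        files.append(record)
    return files
```

Each flattened record still carries its `presigned_url`, which can then be fetched with any HTTP client before the temporary URL expires.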
Below is an example response structure:\n\n```json\n{\n  \"model\": {},\n  \"processor\": {\n    \"2024\": {\n      \"11\": {\n        \"21\": [\n          {\n            \"filename\": \"<Name of the log file>\",\n            \"presigned_url\": \"<A temporary, secure URL to access the log file>\",\n            \"size\": \"Size of the log file in bytes\",\n            \"last_modified\": \"Timestamp indicating when the log file was last updated\"\n          }\n        ]\n      }\n    }\n  }\n}\n```\n\n## Error Handling\nThe `get_logs` function raises custom exceptions to handle various error scenarios:\n\n| Exception                                | Description                                                                                  |\n|------------------------------------------|----------------------------------------------------------------------------------------------|\n| `vipas.exceptions.ClientException (409)` | If the `model_id` does not exist or is invalid.                                              |\n| `vipas.exceptions.UnauthorizedException (401)` | If the authentication token is missing, invalid, or expired.                                 |\n| `vipas.exceptions.ClientException (422)` | If the request parameters are malformed or incomplete.                                       |\n| `vipas.exceptions.ConnectionException`   | If there is a network connectivity issue or the API server is unreachable.                   |\n| `vipas.exceptions.ClientException`       | A generic client-side error occurred during the log retrieval process.                       |\n\n\n## Evaluating a Model against a Challenge\nThe Vipas.AI SDK provides functionality to evaluate your models against specific challenges hosted on the Vipas platform. The evaluate function allows you to submit a model for evaluation against a challenge and track its progress until completion.\n\n### Key Features of the evaluate Function:\n---\n1. 
**Model and Challenge Pairing**: You must provide both a model_id and a challenge_id to evaluate your model against a particular challenge.\n2. **Progress Tracking**: The SDK tracks the progress of the evaluation in the background and logs the status at regular intervals.\n3. **Error Handling**: Specific exceptions like ClientException and general exceptions are captured and handled to ensure smooth operations.\n\n\n### Basic Usage\n---\n### `vipas.model.ModelClient.evaluate(model_id: str, challenge_id: str) \u2192 dict`\n\nEvaluate a model against a challenge.\n\n#### Parameters:\n- `model_id` (str): The unique identifier of the model.\n- `challenge_id` (str): The unique identifier of the challenge.\n\n#### Returns:\n- `dict`: A dictionary containing the result of the model evaluation process.\n\nHere's a basic example demonstrating how to evaluate a model against a challenge using the Vipas.AI SDK:\n```python\nfrom vipas.model import ModelClient\nfrom vipas.exceptions import ClientException\n\ntry:\n    model_id = \"mdl-bosb93njhjc97\"  # Replace with your model ID\n    challenge_id = \"chg-2bg7oqy4halgi\"  # Replace with the challenge ID\n\n    # Create a ModelClient instance\n    model_client = ModelClient()\n\n    # Call the evaluate method to submit the model for evaluation against the challenge\n    response = model_client.evaluate(model_id=model_id, challenge_id=challenge_id)\n\n    print(response)\n\nexcept ClientException as e:\n    print(f\"ClientException occurred: {e}\")\nexcept Exception as e:\n    print(f\"An unexpected error occurred: {e}\")\n```\n\n### Logging Example for evaluate\nThe SDK logs detailed information about the evaluation process, including the model ID and challenge ID being evaluated, as well as the progress of the evaluation. 
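The evaluation result can also be branched on programmatically. Below is a minimal, hypothetical sketch: the `status` key and its values (`inprogress`, `completed`, `failed`) are assumptions based on the statuses described under "Handling the Response" later in this section, not a documented response schema:

```python
def summarize_evaluation(response):
    """Map an evaluate() response dict to a human-readable summary.

    NOTE: hypothetical helper; the 'status' and 'error' field names are
    assumptions -- adjust them to match the actual response shape.
    """
    status = response.get("status", "unknown")
    if status == "completed":
        return "Evaluation completed successfully."
    if status == "failed":
        return "Evaluation failed: " + str(response.get("error", "no details"))
    if status == "inprogress":
        return "Evaluation is still in progress."
    return "Unrecognized status: " + status
```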
Below is an example of the log output:\n```bash\n2024-10-17 15:25:19,706 - vipas.model - INFO - Evaluating model mdl-bosb93njhjc97 against the challenge chg-2bg7oqy4halgi.\n2024-10-17 15:25:20,472 - vipas._rest - INFO - Evaluate model for model: mdl-bosb93njhjc97 against the challenge: chg-2bg7oqy4halgi is in progress.\n2024-10-17 15:25:28,261 - vipas._rest - INFO - Evaluate model for model: mdl-bosb93njhjc97 against the challenge: chg-2bg7oqy4halgi is in progress.\n2024-10-17 15:26:10,805 - vipas._rest - INFO - Evaluate model for model: mdl-bosb93njhjc97 against the challenge: chg-2bg7oqy4halgi completed successfully.\n```\nIn this log sequence:\n\n* The evaluation process begins by logging the model ID and challenge ID.\n* The progress of the evaluation is tracked and logged at regular intervals.\n* Finally, upon successful completion, a message indicates the evaluation was successful.\n\n### Handling the Response\n---\nThe response returned from the evaluate function contains detailed information about the evaluation, including:\n\n* Evaluation status (e.g., inprogress, completed, failed).\n* Any associated results or metrics generated during the evaluation process.\n* Potential error messages, if the evaluation encounters any issues.\n\nBy integrating the evaluate function into your workflow, you can efficiently evaluate your models against challenges on the Vipas platform and gain insights into their performance.\n\n## Listing the Submissions of a Challenge\n\nThe `get_challenge_submissions` function is a convenient method in the Vipas.AI Python SDK for retrieving all submissions made to a specific challenge on the Vipas AI platform. 
This function allows developers to programmatically access challenge submissions by providing the unique challenge identifier.\n\n---\n\n### Getting Started\n\nTo use the `get_challenge_submissions` function, ensure that the Vipas.AI SDK is installed and properly configured in your environment.\n\n---\n\n### Example: Getting Submissions of a Challenge\n\n```python\nfrom vipas.challenge import ChallengeClient\n\nclient = ChallengeClient()\n\nchallenge_id = \"your_challenge_id\"  # chg-xxxxxxxxx\nprint(client.get_challenge_submissions(challenge_id=challenge_id))\n```\n\n---\n\n### Key Parameters\n\n- **challenge_id**:  \n  The unique identifier for the challenge. This ID is used to track the challenge across the platform.\n\n---\n\n### Returns\n\n- **total_count**:  \n  Indicates the total number of challenge runtimes retrieved.\n\n- **challenge_runtimes**:  \n  A list of challenge runtime objects, where each object contains:\n  - **challenge_id**: Unique identifier for the challenge.\n  - **entity_id**: The unique identifier of the user who submitted the model.\n  - **entity_name**: Name of the entity (e.g., user's name).\n  - **model_id**: ID of the associated model.\n  - **transaction_id**: Unique transaction ID for the specific runtime.\n  - **challenge_runtime_metrics**: Contains system metrics related to the runtime, including:\n    - **latency**: Execution latency in milliseconds.\n    - **cpu_metric**: CPU utilization metric in cores.\n    - **memory_metric**: Memory utilization metric in MB.\n  - **created_at**: Timestamp when the runtime was created.\n  - **updated_at**: Timestamp when the runtime was last updated.\n  - **presigned_urls**: Contains temporary URLs to download files related to the runtime:\n    - **input_temporary_url**: URL to download the input file.\n    - **output_temporary_url**: URL to download the expected output file.\n    - **actual_output_temporary_url**: URL to download the actual output file generated by the model.\n\n---\n\n### Response Handling\n\nThe response provides detailed 
information for each user's runtime submission. This includes options to download the input, expected output, and actual output of each submission separately. Additionally, users can access runtime metrics associated with each submission to gain insights into performance and resource utilization.\n\n---\n\n### Error Handling\n\nIn case of errors, the SDK raises exceptions:\n\n- **NotFoundException**:  \n  Raised when the challenge or submission is not found.\n- **ClientException**:  \n  Raised for SDK-related errors, such as invalid parameters or authentication issues.\n- **Other Exceptions**:  \n  Raised for general Python exceptions (e.g., file not found, network errors).\n\n---\n\n\n## Logging\nThe SDK provides a LoggerClient class to handle logging. Here's how you can use it:\n\n### LoggerClient Usage\n\n1. Import the `LoggerClient` class:\n```python\nfrom vipas.logger import LoggerClient\n```\n\n2. Initialize the `LoggerClient`:\n```python\nlogger = LoggerClient(__name__)\n```\n\n3. 
Log messages at different levels:\n```python\nlogger.debug(\"This is a debug message\")\nlogger.info(\"This is an info message\")\nlogger.warning(\"This is a warning message\")\nlogger.error(\"This is an error message\")\nlogger.critical(\"This is a critical message\")\n\n```\n\n### Example of LoggerClient\nHere is a complete example demonstrating the usage of the LoggerClient:\n\n```python\nfrom vipas.logger import LoggerClient\n\ndef main():\n    logger = LoggerClient(__name__)\n    \n    logger.info(\"Starting the main function\")\n    \n    try:\n        # Example operation\n        result = 10 / 2\n        logger.debug(f\"Result of division: {result}\")\n    except ZeroDivisionError as e:\n        logger.error(\"Error occurred: Division by zero\")\n    except Exception as e:\n        logger.critical(f\"Unexpected error: {str(e)}\")\n    finally:\n        logger.info(\"End of the main function\")\n\nmain()\n``` \n\n## Author\nVIPAS.AI\n\n## License\nThis project is licensed under the terms of the [vipas.ai license](LICENSE.md).\n\nBy following the above guidelines, you can effectively use the VIPAS AI Python SDK to interact with the VIPAS AI platform for making predictions, handling exceptions, and logging activities.\n\n\n\n\n",
    "bugtrack_url": null,
    "license": " Apache License Version 2.0, January 2004 http://www.apache.org/licenses/  TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION  1. Definitions.  \"License\" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.  \"Licensor\" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.  \"Legal Entity\" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, \"control\" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.  \"You\" (or \"Your\") shall mean an individual or Legal Entity exercising permissions granted by this License.  \"Source\" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.  \"Object\" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.  \"Work\" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).  \"Derivative Works\" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. 
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.  \"Contribution\" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, \"submitted\" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as \"Not a Contribution.\"  \"Contributor\" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.  2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.  3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.  4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:  (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and  (b) You must cause any modified files to carry prominent notices stating that You changed the files; and  (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and  (d) If the Work includes a \"NOTICE\" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of 
the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.  You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.  5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.  6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.  7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.  8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.  9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.  END OF TERMS AND CONDITIONS",
    "summary": "Python SDK for Vipas AI Platform",
    "version": "1.0.7",
    "project_urls": {
        "Homepage": "https://github.com/vipas-engineering/vipas-python-sdk"
    },
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "dd9ed1cd78aa01bfb0cdf184f1b18b8c7bfaff9fdb439bb542b41dc87047d15d",
                "md5": "ab9761eacfc43ee4cc41a019c98d6c7b",
                "sha256": "7a96a7821fffe0260a5970e1b268eb046534bfe3d16485c20a3bc7e903852fcd"
            },
            "downloads": -1,
            "filename": "vipas-1.0.7-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "ab9761eacfc43ee4cc41a019c98d6c7b",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.7",
            "size": 38441,
            "upload_time": "2024-11-25T07:07:37",
            "upload_time_iso_8601": "2024-11-25T07:07:37.206528Z",
            "url": "https://files.pythonhosted.org/packages/dd/9e/d1cd78aa01bfb0cdf184f1b18b8c7bfaff9fdb439bb542b41dc87047d15d/vipas-1.0.7-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "b61c6cf43d2f791d7cf1a2987664dd69037254f8e0b003b3b7904a52b2be75be",
                "md5": "9156d9614f3279f777d8cb7a59f34e1a",
                "sha256": "283aea3b509499d761876220b7b8fb92d5be7ef3995d1b626243bc072098732e"
            },
            "downloads": -1,
            "filename": "vipas-1.0.7.tar.gz",
            "has_sig": false,
            "md5_digest": "9156d9614f3279f777d8cb7a59f34e1a",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.7",
            "size": 56458,
            "upload_time": "2024-11-25T07:07:40",
            "upload_time_iso_8601": "2024-11-25T07:07:40.399798Z",
            "url": "https://files.pythonhosted.org/packages/b6/1c/6cf43d2f791d7cf1a2987664dd69037254f8e0b003b3b7904a52b2be75be/vipas-1.0.7.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-11-25 07:07:40",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "vipas-engineering",
    "github_project": "vipas-python-sdk",
    "github_not_found": true,
    "lcname": "vipas"
}