vipas

- **Name:** vipas
- **Version:** 1.0.4
- **Home page:** https://github.com/vipas-engineering/vipas-python-sdk
- **Summary:** Python SDK for Vipas AI Platform
- **Upload time:** 2024-10-18 12:57:06
- **Author:** Vipas Team
- **Requires Python:** >=3.7
- **License:** Apache License 2.0
# VIPAS AI Platform SDK
The Vipas AI Python SDK provides a simple and intuitive interface to interact with the Vipas AI platform. This SDK allows you to easily make predictions using pre-trained models hosted on the Vipas AI platform.

## Table of Contents

- [VIPAS AI Platform SDK](#vipas-ai-platform-sdk)
  - [Requirements](#requirements)
  - [Installation & Usage](#installation--usage)
    - [pip install](#pip-install)
  - [Prerequisites](#prerequisites)
    - [Step 1: Fetching the Auth Token](#step-1-fetching-the-auth-token)
    - [Step 2: Setting the Auth Token as an Environment Variable](#step-2-setting-the-auth-token-as-an-environment-variable)
  - [Getting Started](#getting-started)
    - [Basic Usage](#basic-usage)
    - [Handling Exceptions](#handling-exceptions)
  - [Asynchronous Inference Mode](#asynchronous-inference-mode)
    - [Asynchronous Inference Mode Example](#asynchronous-inference-mode-example)
  - [Real-Time Inference Mode](#real-time-inference-mode)
    - [Real-Time Inference Mode Example](#real-time-inference-mode-example)
  - [Publishing Model](#publishing-model)
    - [Publishing Process Overview](#publishing-process-overview)
  - [Evaluating a Model against a Challenge](#evaluating-a-model-against-a-challenge)
  - [Logging](#logging)
    - [LoggerClient Usage](#loggerclient-usage)
    - [Example of LoggerClient](#example-of-loggerclient)
  - [License](#license)

## Requirements

Python 3.7+

## Installation & Usage
### pip install

You can install the vipas SDK from PyPI using the following command:

```sh
pip install vipas
```
(you may need to run `pip` with root permission: `sudo pip install vipas`)

Then import the package:
```python
import vipas
```
## Prerequisites
Before using the Vipas.AI SDK to manage and publish models, you need to fetch your VPS Auth Token from the Vipas.AI platform and configure it as an environment variable.

#### Step 1: Fetching the Auth Token
1. **Log in to Vipas.AI**: Go to the [Vipas.AI](https://vipas.ai) platform and log in to your account.
2. **Navigate to Settings**: Click on your user profile icon in the top right corner and navigate to **Settings**.
3. **Generate the Auth Token**: In the settings, locate the **Temporary Access Token** section, enter your password, and click the button to generate a new Auth Token.
4. **Copy the Token**: Once generated, copy the token. This token is required to authenticate your SDK requests.

#### Step 2: Setting the Auth Token as an Environment Variable
You need to set the `VPS_AUTH_TOKEN` environment variable so the SDK can use it.

##### For Linux and macOS
1. Open a **Terminal**.
2. Run the following command to export the token:

    ```bash
    export VPS_AUTH_TOKEN=<TOKEN>
    ```
   Replace `<TOKEN>` with the actual token you copied from the Vipas.AI UI.
3. To make it persistent across sessions, add the following line to your **~/.bashrc**, **~/.zshrc**, or the corresponding shell configuration file:

    ```bash
    export VPS_AUTH_TOKEN=<TOKEN>
    ```
    Then source the file to apply it to the current session:
    ```bash
    source ~/.bashrc
    ```
##### For Windows
1. Open **Command Prompt** or **PowerShell**.
2. Run the following command to set the token for the current session. In **Command Prompt**:
    ```cmd
    set VPS_AUTH_TOKEN=<TOKEN>
    ```
    In **PowerShell**, use:
    ```powershell
    $env:VPS_AUTH_TOKEN = "<TOKEN>"
    ```
3. To set it permanently, follow these steps:
    1. Open the Start menu, search for **Environment Variables**, and open the **Edit the system environment variables** option.
    2. In the **System Properties** window, click on **Environment Variables**.
    3. Under **User variables**, click **New**.
    4. Set the **Variable name** to **VPS_AUTH_TOKEN** and the **Variable value** to `<TOKEN>`.
    5. Click **OK** to save.

Once you’ve set the environment variable, you can proceed with using the SDK, as it will automatically pick up the token from the environment for authentication.
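
If you want to confirm the variable is actually visible to your Python process before calling the SDK, a quick check like the following works; this is purely illustrative, since the SDK reads the variable from the environment on its own:

```python
import os

# Illustrative check only: the SDK picks up VPS_AUTH_TOKEN from the
# environment automatically; this just verifies the variable is set.
token = os.environ.get("VPS_AUTH_TOKEN")
if token:
    print("VPS_AUTH_TOKEN is set.")
else:
    raise RuntimeError("VPS_AUTH_TOKEN is not set; export it before using the SDK.")
```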





## Getting Started

To get started with the Vipas AI Python SDK, you need to create a ModelClient object and use it to make predictions. Below is a step-by-step guide on how to do this.

### `vipas.model.ModelClient.predict(model_id: str, input_data: str, async_mode: bool = True) → dict`

Make a prediction using a deployed model.

#### Parameters:
- `model_id` (str): The unique identifier of the model.
- `input_data` (str): The input data for the prediction, usually a string (e.g., a base64-encoded image or text data).
- `async_mode` (bool): Whether to perform the prediction asynchronously (default: True).

#### Returns:
- `dict`: A dictionary containing the result of the prediction process.

### Basic Usage

1. Import the necessary modules:
```python
from vipas import model
```

2. Create a ModelClient object:
```python
vps_model_client = model.ModelClient()
```

3. Make a prediction:

```python
model_id = "<MODEL_ID>"
api_response = vps_model_client.predict(model_id=model_id, input_data="<INPUT_DATA>")
```
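
Since `input_data` is often a base64-encoded image (see the parameter description above), here is a minimal sketch of preparing such an input. Whether your model expects raw base64, a data URL, or plain text depends on the model and its processor, so the encoding and the file name `example.jpg` are assumptions for illustration:

```python
import base64

from vipas import model

# Hypothetical input file; replace with whatever your model expects.
with open("example.jpg", "rb") as f:
    input_data = base64.b64encode(f.read()).decode("utf-8")

vps_model_client = model.ModelClient()
api_response = vps_model_client.predict(model_id="<MODEL_ID>", input_data=input_data)
```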

### Handling Exceptions
The SDK provides specific exceptions to handle different error scenarios:

1. UnauthorizedException: Raised when the API key is invalid or missing.
2. NotFoundException: Raised when the model is not found.
3. BadRequestException: Raised when the input data is invalid.
4. ForbiddenException: Raised when the user does not have permission to access the model.
5. ConnectionException: Raised when there is a connection error.
6. RateLimitException: Raised when the rate limit is exceeded.
7. ClientException: Raised when there is a client error.
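
The examples in this document import `UnauthorizedException`, `NotFoundException`, and `ClientException` from `vipas.exceptions`; assuming the remaining exceptions live in the same module, a compact handling sketch looks like this:

```python
from vipas import model
from vipas.exceptions import (
    UnauthorizedException,
    NotFoundException,
    BadRequestException,  # assumed to live in vipas.exceptions like the others
    ClientException,
)

vps_model_client = model.ModelClient()
try:
    api_response = vps_model_client.predict(model_id="<MODEL_ID>", input_data="<INPUT_DATA>")
except UnauthorizedException as err:
    print(f"Check your VPS_AUTH_TOKEN: {err}")
except NotFoundException as err:
    print(f"Model not found: {err}")
except BadRequestException as err:
    print(f"Invalid input data: {err}")
except ClientException as err:
    print(f"Client error: {err}")
```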

### Asynchronous Inference Mode
---
Asynchronous Inference Mode is a near-real-time inference option that queues incoming requests and processes them asynchronously. This mode is suitable when you need to handle **large payloads** as they arrive or run models with long inference processing times that do not require sub-second latency. By default, the `predict` method operates in asynchronous mode: it polls the status endpoint until the result is ready. This is ideal for batch processing or tasks where immediate responses are not critical.


#### Asynchronous Inference Mode Example
```python
api_response = vps_model_client.predict(model_id=model_id, input_data="<INPUT_DATA>", async_mode=True)
```
### Real-Time Inference Mode
---
Real-Time Inference Mode is designed for use cases requiring real-time predictions. In this mode, the predict method processes the request immediately and returns the result without polling the status endpoint. This mode is ideal for applications that need quick, real-time responses and can afford to handle potential timeouts for long-running inferences. It is particularly suitable for interactive applications where users expect immediate feedback.

#### Real-Time Inference Mode Example
```python
api_response = vps_model_client.predict(model_id=model_id, input_data="<INPUT_DATA>", async_mode=False)
```

### Detailed Explanation
#### Asynchronous Inference Mode
---
##### Description:
This mode allows the system to handle requests by queuing them and processing them as resources become available. It is beneficial for scenarios where the inference task might take longer to process, and an immediate response is not necessary.

##### Behavior:
The system polls the status endpoint to check if the result is ready and returns the result once processing is complete.

##### Ideal For:
Batch processing, large payloads, long-running inference tasks.

##### Default Setting:
By default, async_mode is set to True to support heavier inference requests.

##### Example Usage:

```python
api_response = vps_model_client.predict(model_id=model_id, input_data="<INPUT_DATA>", async_mode=True)
```

#### Real-Time Inference Mode
---
##### Description:
This mode is intended for use cases that require immediate results. The system processes the request directly and returns the result without polling.

##### Behavior:
The request is processed immediately, and the result is returned. If the inference takes longer than 29 seconds, a 504 Gateway Timeout error is returned.

##### Ideal For:
Applications requiring sub-second latency, interactive applications needing immediate feedback.

##### Example Usage:

```python
api_response = vps_model_client.predict(model_id=model_id, input_data="<INPUT_DATA>", async_mode=False)
```
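
Because real-time requests are cut off at 29 seconds, long-running inferences need a fallback. This README does not specify which exception a 504 surfaces as, so the sketch below assumes it arrives as a `ClientException` and retries in asynchronous mode:

```python
from vipas import model
from vipas.exceptions import ClientException

vps_model_client = model.ModelClient()
try:
    # Real-time mode: no polling, but inferences longer than ~29 seconds
    # fail with a 504 Gateway Timeout.
    api_response = vps_model_client.predict(
        model_id="<MODEL_ID>", input_data="<INPUT_DATA>", async_mode=False
    )
except ClientException as err:
    # Assumption: the gateway timeout surfaces as a ClientException.
    # Fall back to asynchronous mode, which polls until the result is ready.
    api_response = vps_model_client.predict(
        model_id="<MODEL_ID>", input_data="<INPUT_DATA>", async_mode=True
    )
```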

By understanding and choosing the appropriate mode for your use case, you can optimize the performance and responsiveness of your AI applications on Vipas.AI.


### Example Usage for ModelClient using asynchronous inference mode

```python
from vipas import model
from vipas.exceptions import UnauthorizedException, NotFoundException, ClientException
from vipas.logger import LoggerClient

logger = LoggerClient(__name__)

def main():
    # Create a ModelClient object
    vps_model_client = model.ModelClient()

    # Make a prediction
    try:
        model_id = "<MODEL_ID>"
        api_response = vps_model_client.predict(model_id=model_id, input_data="<INPUT_DATA>")
        logger.info(f"Prediction response: {api_response}")
    except UnauthorizedException as err:
        logger.error(f"UnauthorizedException: {err}")
    except NotFoundException as err:
        logger.error(f"NotFoundException: {err}")
    except ClientException as err:
        logger.error(f"ClientException: {err}")

if __name__ == "__main__":
    main()

```
## Publishing Model
The Vipas.AI SDK provides a simple and powerful interface for developers to publish, manage, and deploy AI models. With this SDK, developers can upload their models, configure model processors, and deploy them to the Vipas platform seamlessly. This documentation will guide you through the process of using the SDK to publish and manage models built on various machine learning frameworks, including TensorFlow, PyTorch, ONNX, XGBoost, Scikit-learn, and more.


### Getting Started
---
### `vipas.model.ModelClient.publish(model_id: str, model_folder_path: str, model_framework_type: str, onnx_config_path: Optional[str] = None, processor_folder_path: Optional[str] = None, processor_image: Optional[str] = None, auto_launch: bool = True, override_model: bool = True) → dict`

Publish a model to the Vipas AI platform.

#### Parameters:
- `model_id` (str): The unique identifier of the model.
- `model_folder_path` (str): The path to the folder containing the model files.
- `model_framework_type` (str): The framework type of the model (e.g., 'tensorflow', 'pytorch', etc.).
- `onnx_config_path` (Optional[str]): The path to the ONNX config file (if applicable).
- `processor_folder_path` (Optional[str]): The path to the processor folder (if using a custom processor).
- `processor_image` (Optional[str]): The Docker image to use for the processor.
- `auto_launch` (bool): Whether to automatically launch the model after publishing (default: True).
- `override_model` (bool): Whether to override the existing model (default: True).

#### Returns:
- `dict`: A dictionary containing the status and details of the model publishing process.

Here is a basic example of how to use the SDK to publish a model from any remote environment:

```python
from vipas.model import ModelClient
from vipas.exceptions import UnauthorizedException, NotFoundException, ClientException


# Paths to model and processor files
model_folder_path = "/path/to/your/model"
onnx_config_path = "/path/to/config/config.pbtxt"  # Optional, depends on framework
processor_folder_path = "/path/to/your/processor"

# Unique model ID to identify the model in Vipas.AI
model_id = "your_model_id" # mdl-xxxxxxxxx

try:
    # Initialize the ModelClient
    model_client = ModelClient()

    # Publish the model
    model_client.publish(
        model_id=model_id,
        model_folder_path=model_folder_path,
        model_framework_type="tensorflow",  # Supported: tensorflow, pytorch, onnx, xgboost, sklearn, etc.
        onnx_config_path=onnx_config_path,  # Required for the ONNX model framework        
        processor_folder_path=processor_folder_path,  # Optional if using custom processors
        processor_image="vps-processor-base:1.0",  # Allowed values: ["vps-processor-base:1.0"]
        auto_launch=True,  # Whether to automatically launch the model after upload, Default True
        override_model=True  # Whether to override existing model deployments, Default True
    )
except UnauthorizedException as e:
    print(f"UnauthorizedException: {e}")
except NotFoundException as e:
    print(f"NotFoundException: {e}")
except ClientException as e:
    print(f"ClientException: {e}")
except Exception as e:
    print(f"Exception: {e}")
```

### Publishing Process Overview
---
When you publish a model using the Vipas SDK, the following steps occur behind the scenes:
1. **Model Upload**: The SDK uploads the model files from the specified directory. The total size of the files is calculated, and the upload process is logged step-by-step.
2. **Processor Upload (Optional)**: If you are using a custom processor (a custom Python script), the SDK uploads the processor files. This step is optional but can be critical for advanced use cases where model input needs specific transformations.
3. **Processor Staging (Optional)**: After the processor upload, the processor is staged once its files have been properly uploaded.
4. **Model Staging and Processor Build**: Once the model and its associated files (including the processor, if applicable) are uploaded, the model is placed in a staging state. This stage ensures that all files are correctly uploaded and prepares the model for deployment.
5. **Model Launch (Optional)**: If the auto_launch parameter is set to True, the model will be automatically launched. This means that the model will be deployed and become available for real-time and asynchronous inference. The launch status is logged until the process is completed successfully.
6. **Rollback Mechanism**: If a model is already deployed and a new version is being uploaded, the SDK ensures that the previous version is rolled back in case of any issues during the new model deployment. 
> **Note:** The rollback mechanism will not occur if you set `override_model=False`.

#### Key parameters
1. **model_id**: The unique identifier for the model. This ID is used to track the model across the platform.
2. **model_folder_path**: The path to the directory containing the model files that need to be uploaded.
3. **model_framework_type**: The framework used for the model (e.g., TensorFlow, PyTorch, ONNX, XGBoost, Scikit-learn). Each framework has its own nuances in terms of model configuration.
4. **onnx_config_path[Optional]**: The path to the ONNX configuration file required by the ONNX framework. 
5. **processor_folder_path[Optional]**: The path to the folder containing the custom processor files, such as a Python script, if you are using a custom processor.
6. **processor_image[Optional]**: The Docker base image for the processor. Currently supported: "vps-processor-base:1.0".
7. **auto_launch[Default: True]**: A boolean flag indicating whether to automatically launch the model after publishing. Default is True.
8. **override_model[Default: True]**: A boolean flag indicating whether to override any existing model deployment. Default is True.

#### Supported Frameworks
The SDK supports the following machine learning frameworks:
1. TensorFlow: Native TensorFlow SavedModel format.
2. PyTorch: Model files saved as .pt or .pth.
3. ONNX: ONNX models typically require a configuration file (with an extension such as .pbtxt, .config, or .txt) for setting input and output shapes.
4. XGBoost: For tree-based models exported from XGBoost.
5. Scikit-learn: For traditional machine learning models exported from scikit-learn.

> ⚠️ **Note:** For ONNX models, you must provide an ONNX configuration file with extensions like `.pbtxt`, `.config`, or `.txt` that describe the input-output mapping.
> 
> Below is an example ONNX configuration for input and output details needed by the model:
> 
> ```protobuf
> input [
>  {
>    name: "input1"  # Name of the input going to the model (input tensor)
>    data_type: TYPE_FP32  # Data type of the input, FP32 stands for 32-bit floating point (commonly used in deep learning)
>    dims: [1, 3, 224, 224]  # Dimensions of the input tensor: [Batch size, Channels, Height, Width]
>  }
> ]
> output [
>  {
>    name: "output1"  # Name of the output from the model (output tensor)
>    data_type: TYPE_FP32  # Data type of the output, FP32 represents 32-bit floating point
>    dims: [1, 3, 224, 224]  # Dimensions of the output tensor: [Batch size, Channels, Height, Width]
>  }
> ]
> ```

#### Expected Behavior
1. **Successful Upload**: The model and processor files will be uploaded, and the model will be placed in the staged state.
2. **Automatic Launch**: If auto_launch=True, the model will be launched after the upload completes, making it available for real-time and asynchronous inference.
3. **Override of Existing Models**: If a model with the same model_id is already deployed, the new model will override the previous deployment if override_model=True.

#### Logs Example
Once you run the publish() method, you can expect logs similar to the following:
```bash
2024-10-08 16:15:15,043 - vipas.model - INFO - Publishing model mdl-ikas2ot2ohsux with framework type onnx.
2024-10-08 16:15:19,818 - vipas.model - INFO - File processor.py uploaded successfully.
2024-10-08 16:16:22,952 - vipas.model - INFO - Model mdl-ikas2ot2ohsux and related processor launched successfully.
```

This log sequence shows the entire process of publishing the model, uploading the processor, and successfully launching the model. Any errors or warnings will also be captured in the logs, which can help troubleshoot issues.

## Evaluating a Model against a Challenge
The Vipas.AI SDK provides functionality to evaluate your models against specific challenges hosted on the Vipas platform. The evaluate function allows you to submit a model for evaluation against a challenge and track its progress until completion.

### Key Features of the evaluate Function:
---
1. **Model and Challenge Pairing**: You must provide both a model_id and a challenge_id to evaluate your model against a particular challenge.
2. **Progress Tracking**: The SDK tracks the progress of the evaluation in the background and logs the status at regular intervals.
3. **Error Handling**: Specific exceptions like ClientException and general exceptions are captured and handled to ensure smooth operations.


### Basic Usage
---
### `vipas.model.ModelClient.evaluate(model_id: str, challenge_id: str) → dict`

Evaluate a model against a challenge.

#### Parameters:
- `model_id` (str): The unique identifier of the model.
- `challenge_id` (str): The unique identifier of the challenge.

#### Returns:
- `dict`: A dictionary containing the result of the model evaluation process.

Here's a basic example demonstrating how to evaluate a model against a challenge using the Vipas.AI SDK:
```python
from vipas.model import ModelClient
from vipas.exceptions import ClientException

try:
    model_id = "mdl-bosb93njhjc97"  # Replace with your model ID
    challenge_id = "chg-2bg7oqy4halgi"  # Replace with the challenge ID

    # Create a ModelClient instance
    model_client = ModelClient()

    # Call the evaluate method to submit the model for evaluation against the challenge
    response = model_client.evaluate(model_id=model_id, challenge_id=challenge_id)

    print(response)

except ClientException as e:
    print(f"ClientException occurred: {e}")
except Exception as e:
    print(f"An unexpected error occurred: {e}")
```

### Logging Example for evaluate
The SDK logs detailed information about the evaluation process, including the model ID and challenge ID being evaluated, as well as the progress of the evaluation. Below is an example of the log output:
```bash
2024-10-17 15:25:19,706 - vipas.model - INFO - Evaluating model mdl-bosb93njhjc97 against the challenge chg-2bg7oqy4halgi.
2024-10-17 15:25:20,472 - vipas._rest - INFO - Evaluate model for model: mdl-bosb93njhjc97 against the challenge: chg-2bg7oqy4halgi is in progress.
2024-10-17 15:25:28,261 - vipas._rest - INFO - Evaluate model for model: mdl-bosb93njhjc97 against the challenge: chg-2bg7oqy4halgi is in progress.
2024-10-17 15:26:10,805 - vipas._rest - INFO - Evaluate model for model: mdl-bosb93njhjc97 against the challenge: chg-2bg7oqy4halgi completed successfully.
```
In this log sequence:

* The evaluation process begins by logging the model ID and challenge ID.
* The progress of the evaluation is tracked and logged at regular intervals.
* Finally, upon successful completion, a message indicates the evaluation was successful.

### Handling the Response
---
The response returned from the evaluate function contains detailed information about the evaluation, including:

* Evaluation status (e.g., `inprogress`, `completed`, `failed`).
* Any associated results or metrics generated during the evaluation process.
* Potential error messages, if the evaluation encounters any issues.

By integrating the evaluate function into your workflow, you can efficiently evaluate your models against challenges on the Vipas platform and gain insights into their performance.
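
The exact schema of the returned dictionary is not documented here, so the `status` key below is a hypothetical illustration of how you might branch on the status values listed above:

```python
from vipas.model import ModelClient

model_client = ModelClient()
response = model_client.evaluate(model_id="mdl-bosb93njhjc97", challenge_id="chg-2bg7oqy4halgi")

# "status" is a hypothetical key: the README lists the possible values
# (inprogress, completed, failed) but not the response schema.
if response.get("status") == "completed":
    print("Evaluation finished:", response)
else:
    print("Evaluation not complete:", response)
```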

## Logging
The SDK provides a LoggerClient class to handle logging. Here's how you can use it:

### LoggerClient Usage

1. Import the `LoggerClient` class:
```python
from vipas.logger import LoggerClient
```

2. Initialize the `LoggerClient`:
```python
logger = LoggerClient(__name__)
```

3. Log messages at different levels:
```python
logger.debug("This is a debug message")
logger.info("This is an info message")
logger.warning("This is a warning message")
logger.error("This is an error message")
logger.critical("This is a critical message")

```

### Example of LoggerClient
Here is a complete example demonstrating the usage of the LoggerClient:

```python
from vipas.logger import LoggerClient

def main():
    logger = LoggerClient(__name__)
    
    logger.info("Starting the main function")
    
    try:
        # Example operation
        result = 10 / 2
        logger.debug(f"Result of division: {result}")
    except ZeroDivisionError as e:
        logger.error("Error occurred: Division by zero")
    except Exception as e:
        logger.critical(f"Unexpected error: {str(e)}")
    finally:
        logger.info("End of the main function")

if __name__ == "__main__":
    main()
``` 

## Author
VIPAS.AI

## License
This project is licensed under the terms of the [vipas.ai license](LICENSE.md).

By following the above guidelines, you can effectively use the VIPAS AI Python SDK to interact with the VIPAS AI platform for making predictions, handling exceptions, and logging activities.
    "summary": "Python SDK for Vipas AI Platform",
    "version": "1.0.4",
    "project_urls": {
        "Homepage": "https://github.com/vipas-engineering/vipas-python-sdk"
    },
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "560306aecc7bfef2e05c8758575f0d7f23ebb64a93dca383a3677a84f2c11c5a",
                "md5": "f9b01e9e77eee1632be55040983663b5",
                "sha256": "813bfa390f2c9fe7ef4bf41cc4086f83aebc8dfda58a5eb8be81790187ec54e3"
            },
            "downloads": -1,
            "filename": "vipas-1.0.4-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "f9b01e9e77eee1632be55040983663b5",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.7",
            "size": 29787,
            "upload_time": "2024-10-18T12:57:04",
            "upload_time_iso_8601": "2024-10-18T12:57:04.033106Z",
            "url": "https://files.pythonhosted.org/packages/56/03/06aecc7bfef2e05c8758575f0d7f23ebb64a93dca383a3677a84f2c11c5a/vipas-1.0.4-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "d76bdc217d4e43a4d5d5189ee291a725936cf4e2378cfda9a5ba5d150cdf368e",
                "md5": "77c6e8fce3396e3cea583518a81f67ea",
                "sha256": "f35938b0efba28618d723ab98992c62d36d411fabcc492d78f3d8542bbdfd6fe"
            },
            "downloads": -1,
            "filename": "vipas-1.0.4.tar.gz",
            "has_sig": false,
            "md5_digest": "77c6e8fce3396e3cea583518a81f67ea",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.7",
            "size": 35804,
            "upload_time": "2024-10-18T12:57:06",
            "upload_time_iso_8601": "2024-10-18T12:57:06.470029Z",
            "url": "https://files.pythonhosted.org/packages/d7/6b/dc217d4e43a4d5d5189ee291a725936cf4e2378cfda9a5ba5d150cdf368e/vipas-1.0.4.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-10-18 12:57:06",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "vipas-engineering",
    "github_project": "vipas-python-sdk",
    "github_not_found": true,
    "lcname": "vipas"
}
        