vipas

Name: vipas
Version: 1.0.0 (PyPI)
Summary: Python SDK for Vipas AI Platform
Author: Vipas Team
Requires Python: >=3.7
Upload time: 2024-07-20 06:22:51
Requirements: none recorded
# VIPAS AI Platform SDK
The Vipas AI Python SDK provides a simple and intuitive interface to interact with the Vipas AI platform. This SDK allows you to easily make predictions using pre-trained models hosted on the Vipas AI platform.

## Requirements

Python 3.7+

## Installation & Usage
### pip install

You can install the vipas SDK from PyPI using the following command:

```sh
pip install vipas
```
(you may need to run `pip` with root permission: `sudo pip install vipas`; alternatively, install into a virtual environment)

Then import the package:
```python
import vipas
```

## Getting Started

To get started with the Vipas AI Python SDK, you need to create a ModelClient object and use it to make predictions. Below is a step-by-step guide on how to do this.

### Basic Usage

1. Import the necessary modules:
```python
from vipas import model
```

2. Create a ModelClient object:
```python
vps_model_client = model.ModelClient()
```

3. Make a prediction:

```python
model_id = "<MODEL_ID>"
api_response = vps_model_client.predict(model_id=model_id, input_data="<INPUT_DATA>")
```

### Handling Exceptions
The SDK provides specific exceptions to handle different error scenarios:

1. UnauthorizedException: Raised when the API key is invalid or missing.
2. NotFoundException: Raised when the model is not found.
3. BadRequestException: Raised when the input data is invalid.
4. ForbiddenException: Raised when the user does not have permission to access the model.
5. ConnectionException: Raised when there is a connection error.
6. RateLimitException: Raised when the rate limit is exceeded.
7. ClientException: Raised when there is a client error.
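
Some of these errors, such as `RateLimitException` and `ConnectionException`, are often transient and worth retrying. The helper below is an illustrative sketch, not part of the SDK: it wraps any zero-argument prediction callable with a fixed-delay retry loop.

```python
import time

def predict_with_retry(predict_fn, retries=3, delay=2.0,
                       retryable=(Exception,)):
    """Call predict_fn, retrying on the given exception classes.

    predict_fn: a zero-argument callable that performs the prediction.
    retryable: exception classes worth retrying (e.g. rate limits or
    transient connection errors).
    """
    for attempt in range(retries):
        try:
            return predict_fn()
        except retryable:
            if attempt == retries - 1:
                raise  # out of attempts; propagate the last error
            time.sleep(delay)
```

With the SDK you might pass `retryable=(RateLimitException, ConnectionException)` and `predict_fn=lambda: vps_model_client.predict(model_id=model_id, input_data="<INPUT_DATA>")`.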

### Asynchronous Inference Mode
Asynchronous Inference Mode is a near-real-time inference option that queues incoming requests and processes them asynchronously. This mode is suitable when you need to handle large payloads as they arrive, or run models with long inference times that do not require sub-second latency. By default, the `predict` method operates in asynchronous mode, polling the status endpoint until the result is ready. This is ideal for batch processing or tasks where immediate responses are not critical.


#### Asynchronous Inference Mode Example
```python
api_response = vps_model_client.predict(model_id=model_id, input_data="<INPUT_DATA>", async_mode=True)
```
### Real-Time Inference Mode
Real-Time Inference Mode is designed for use cases requiring real-time predictions. In this mode, the predict method processes the request immediately and returns the result without polling the status endpoint. This mode is ideal for applications that need quick, real-time responses and can afford to handle potential timeouts for long-running inferences. It is particularly suitable for interactive applications where users expect immediate feedback.

#### Real-Time Inference Mode Example
```python
api_response = vps_model_client.predict(model_id=model_id, input_data="<INPUT_DATA>", async_mode=False)
```

### Detailed Explanation
#### Asynchronous Inference Mode
##### Description:
This mode allows the system to handle requests by queuing them and processing them as resources become available. It is beneficial for scenarios where the inference task might take longer to process, and an immediate response is not necessary.

##### Behavior:
The system polls the status endpoint to check if the result is ready and returns the result once processing is complete.

##### Ideal For:
Batch processing, large payloads, long-running inference tasks.

##### Default Setting:
By default, `async_mode` is set to `True` to support heavier inference requests.

##### Example Usage:

```python
api_response = vps_model_client.predict(model_id=model_id, input_data="<INPUT_DATA>", async_mode=True)
```

#### Real-Time Inference Mode
##### Description:
This mode is intended for use cases that require immediate results. The system processes the request directly and returns the result without polling.

##### Behavior:
The request is processed immediately, and the result is returned. If the inference takes longer than 29 seconds, a 504 Gateway Timeout error is returned.

##### Ideal For:
Applications requiring sub-second latency, interactive applications needing immediate feedback.

##### Example Usage:

```python
api_response = vps_model_client.predict(model_id=model_id, input_data="<INPUT_DATA>", async_mode=False)
```

By understanding and choosing the appropriate mode for your use case, you can optimize the performance and responsiveness of your AI applications on Vipas.AI.
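
One pattern that combines both modes is to attempt a real-time call first and fall back to asynchronous mode when it fails. The sketch below is illustrative and not part of the SDK; in particular, which exception the 29-second 504 timeout surfaces as is an assumption here, so verify it against the SDK's exception hierarchy before relying on it.

```python
def predict_with_fallback(client, model_id, input_data,
                          timeout_exceptions=(Exception,)):
    """Try real-time inference first; fall back to asynchronous mode.

    timeout_exceptions: exception classes that indicate the real-time
    call timed out (which exception a 504 maps to is an assumption;
    check the SDK's exception hierarchy).
    """
    try:
        return client.predict(model_id=model_id, input_data=input_data,
                              async_mode=False)
    except timeout_exceptions:
        # Long-running inference: re-submit in asynchronous mode, which
        # polls the status endpoint until the result is ready.
        return client.predict(model_id=model_id, input_data=input_data,
                              async_mode=True)
```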


### Example Usage for ModelClient using asynchronous inference mode

```python
from vipas import model
from vipas.exceptions import UnauthorizedException, NotFoundException, ClientException
from vipas.logger import LoggerClient

logger = LoggerClient(__name__)

def main():
    # Create a ModelClient object
    vps_model_client = model.ModelClient()

    # Make a prediction
    try:
        model_id = "<MODEL_ID>"
        api_response = vps_model_client.predict(model_id=model_id, input_data="<INPUT_DATA>")
        logger.info(f"Prediction response: {api_response}")
    except UnauthorizedException as err:
        logger.error(f"UnauthorizedException: {err}")
    except NotFoundException as err:
        logger.error(f"NotFoundException: {err}")
    except ClientException as err:
        logger.error(f"ClientException: {err}")

if __name__ == "__main__":
    main()

```

## Logging
The SDK provides a LoggerClient class to handle logging. Here's how you can use it:

### LoggerClient Usage

1. Import the `LoggerClient` class:
```python
from vipas.logger import LoggerClient
```

2. Initialize the `LoggerClient`:
```python
logger = LoggerClient(__name__)
```

3. Log messages at different levels:
```python
logger.debug("This is a debug message")
logger.info("This is an info message")
logger.warning("This is a warning message")
logger.error("This is an error message")
logger.critical("This is a critical message")

```

### Example of LoggerClient
Here is a complete example demonstrating the usage of the LoggerClient:

```python
from vipas.logger import LoggerClient

def main():
    logger = LoggerClient(__name__)
    
    logger.info("Starting the main function")
    
    try:
        # Example operation
        result = 10 / 2
        logger.debug(f"Result of division: {result}")
    except ZeroDivisionError as e:
        logger.error("Error occurred: Division by zero")
    except Exception as e:
        logger.critical(f"Unexpected error: {str(e)}")
    finally:
        logger.info("End of the main function")

if __name__ == "__main__":
    main()
``` 

## Author
VIPAS.AI

## License
This project is licensed under the terms of the [vipas.ai license](LICENSE.md).

By following the above guidelines, you can effectively use the VIPAS AI Python SDK to interact with the VIPAS AI platform for making predictions, handling exceptions, and logging activities.





            
