arize

Name: arize
Version: 7.30.0
Summary: A helper library to interact with Arize AI APIs
Upload time: 2024-12-17 17:49:10
Requires Python: >=3.6
License: BSD
Keywords: arize, evaluations, explainability, llm, monitoring, observability, tracing
<div align="center">
  <img src="https://storage.googleapis.com/arize-assets/arize-logo-white.jpg" width="600" /><br><br>
</div>

[![Pypi](https://badge.fury.io/py/arize.svg)](https://badge.fury.io/py/arize)
[![Slack](https://img.shields.io/badge/slack-@arize-yellow.svg?logo=slack)](https://join.slack.com/t/arize-ai/shared_invite/zt-g9c1j1xs-aQEwOAkU4T2x5K8cqI1Xqg)

---

## Overview

A helper package to interact with Arize AI APIs.

Arize is an end-to-end ML & LLM observability and monitoring platform. The platform is designed to help AI & ML engineers and data science practitioners surface and fix issues with ML models in production faster with:

- LLM tracing
- Automated ML monitoring and model monitoring
- Workflows to troubleshoot model performance
- Real-time visualizations for model performance monitoring, data quality monitoring, and drift monitoring
- Model prediction cohort analysis
- Pre-deployment model validation
- Integrated model explainability

---

## Quickstart

This guide will help you instrument your code to log observability data for model monitoring and ML observability. The types of data supported include prediction labels, human-readable/debuggable model features and tags, actual labels (once the ground truth is learned), and other model-related data. Logging model data allows you to generate powerful visualizations in the Arize platform to better monitor model performance, understand issues that arise, and debug your model's behavior. Additionally, Arize provides data quality monitoring, data drift detection, and performance management of your production models.

Start logging your model data with the following steps:

### 1. Create your account

Sign up for a free account [HERE](https://app.arize.com/auth/join).

<div align="center">
  <img src="https://storage.googleapis.com/arize-assets/Arize%20UI%20platform.jpg" /><br><br>
</div>

### 2. Get your service API key

When you create an account, we generate a service API key. You will need this API key and your Space key to authenticate your logging calls.

### 3. Instrument your code

### Python Client

If you are using the Arize Python client, add a few lines to your code to log predictions and actuals. Logs are sent to Arize asynchronously.

### Install Library

Install the Arize library in an environment using Python >= 3.6.

```sh
$ pip3 install arize
```

Or clone the repo:

```sh
$ git clone https://github.com/Arize-ai/client_python.git
$ python3 -m pip install client_python/
```

### Initialize Python Client

Initialize the Arize client at the start of your service using the API key and Space key created above.

> **_NOTE:_** We strongly suggest storing the API key as a secret or an environment variable.
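
For example, a minimal sketch of providing the key via environment variables before starting your service (the shell commands and values are illustrative; the Python snippet below reads `ARIZE_API_KEY`):

```sh
$ export ARIZE_API_KEY="YOUR_API_KEY"
$ export ARIZE_SPACE_KEY="YOUR_SPACE_KEY"
```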

```python
import os

from arize.api import Client
from arize.utils.types import ModelTypes, Environments

API_KEY = os.environ.get('ARIZE_API_KEY')  # if passing api_key via environment variables

arize_client = Client(space_key='ARIZE_SPACE_KEY', api_key=API_KEY)
```

### Collect your model input features and labels you'd like to track

#### Real-time single prediction:

For a single real-time prediction, you can track all input features used at prediction time by logging them via a key:value dictionary.

```python
features = {
    'state': 'ca',
    'city': 'berkeley',
    'merchant_name': 'Peets Coffee',
    'pos_approved': True,
    'item_count': 10,
    'merchant_type': 'coffee shop',
    'charge_amount': 20.11,
}
```

#### Bulk predictions:

When dealing with bulk predictions, you can pass input features, prediction/actual labels, and prediction IDs for more than one prediction via a pandas DataFrame, where df.columns contain the feature names.

```python
# e.g. features from a CSV; label DataFrames must be 2-D, with df.columns corresponding to the label name
import uuid

import numpy as np
import pandas as pd

features_df = pd.read_csv('path/to/file.csv')

prediction_labels_df = pd.DataFrame(np.random.randint(1, 100, size=(features_df.shape[0], 1)))

ids_df = pd.DataFrame([str(uuid.uuid4()) for _ in range(len(prediction_labels_df.index))])
```

### Log Predictions

#### Single real-time prediction:

```python
# Returns a single concurrent.futures.Future
pred = arize_client.log(
    model_id='sample-model-1',
    model_version='v1.23.64',
    model_type=ModelTypes.BINARY,
    prediction_id='plED4eERDCasd9797ca34',
    prediction_label=True,
    features=features,
)

# To confirm that the log request completed successfully, wait for the future to resolve.
# NB: This is a blocking call
res = pred.result()
if res.status_code != 200:
    print(f'future failed with response code {res.status_code}, {res.text}')
```

#### Bulk upload of predictions:

```python
# Returns a list of concurrent.futures.Future
import concurrent.futures as cf

responses = arize_client.bulk_log(
    model_id='sample-model-1',
    model_version='v1.23.64',
    model_type=ModelTypes.BINARY,
    prediction_ids=ids_df,
    prediction_labels=prediction_labels_df,
    features=features_df,
)

# To confirm that the log requests completed successfully, wait for the futures to resolve.
# NB: This is a blocking call
for response in cf.as_completed(responses):
    res = response.result()
    if res.status_code != 200:
        print(f'future failed with response code {res.status_code}, {res.text}')
```

The client's `log` function returns a single concurrent future, while `bulk_log` returns a list of concurrent futures, so logging is asynchronous. To capture the logging response, you can await the resolved futures as shown above. If you prefer a fire-and-forget pattern, you can disregard the responses altogether.
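
If you would rather not block the calling thread at all, one standard-library option (a sketch, assuming `pred` is the future returned by `log` as in the example above) is to attach a completion callback instead of awaiting the result:

```python
# Sketch only: report failures from a callback instead of blocking on .result()
def report_log_status(future):
    res = future.result()
    if res.status_code != 200:
        print(f'future failed with response code {res.status_code}, {res.text}')

pred.add_done_callback(report_log_status)
```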

We automatically discover new models logged over time based on the model ID sent on each prediction.

### Logging Actual Labels

> **_NOTE:_** The prediction_id passed in here matches the prediction_id of the original prediction logged in the example above.

```python
response = arize_client.log(
    model_id='sample-model-1',
    model_type=ModelTypes.BINARY,
    prediction_id='plED4eERDCasd9797ca34',
    actual_label=False
    )
```

#### Bulk upload of actuals:

```python
import concurrent.futures as cf

responses = arize_client.bulk_log(
    model_id='sample-model-1',
    model_type=ModelTypes.BINARY,
    prediction_ids=ids_df,
    actual_labels=actual_labels_df,
)

# To confirm that the log requests completed successfully, wait for the futures to resolve.
# NB: This is a blocking call
for response in cf.as_completed(responses):
    res = response.result()
    if res.status_code != 200:
        print(f'future failed with response code {res.status_code}, {res.text}')
```

Once the actual labels (ground truth) for your predictions have been determined, you can send them to Arize and evaluate your metrics over time. Each actual label is joined to its prediction by prediction ID, so the IDs sent with actuals must match the IDs sent with the original predictions.
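
For example, a minimal sketch (illustrative only; `resolved_outcomes` is a hypothetical mapping from prediction ID to ground-truth label) of building `actual_labels_df` so its rows line up with the `ids_df` sent with the predictions:

```python
import pandas as pd

# resolved_outcomes: {prediction_id: actual_label}, collected from your own source of truth (assumed)
actual_labels_df = pd.DataFrame([resolved_outcomes[pid] for pid in ids_df[0]])
```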

### Bulk upload of all your data (features, predictions, actuals, SHAP values) in a pandas.DataFrame

Use `arize.pandas.logger` to publish a DataFrame containing features, predicted labels, actual labels, and/or SHAP values to Arize for monitoring, analysis, and explainability.

#### Initialize Arize Client from `arize.pandas.logger`

```python
import os

from arize.pandas.logger import Client, Schema
from arize.utils.types import ModelTypes, Environments

API_KEY = os.environ.get('ARIZE_API_KEY')  # if passing api_key via environment variables
arize_client = Client(space_key='ARIZE_SPACE_KEY', api_key=API_KEY)
```

#### Logging features & predictions only, then actuals

```python
# feature_cols is the list of feature column names in your_sample_df
response = arize_client.log(
    dataframe=your_sample_df,
    model_id="fraud-model",
    model_version="1.0",
    model_type=ModelTypes.SCORE_CATEGORICAL,
    environment=Environments.PRODUCTION,
    schema=Schema(
        prediction_id_column_name="prediction_id",
        timestamp_column_name="prediction_ts",
        prediction_label_column_name="prediction_label",
        prediction_score_column_name="prediction_score",
        feature_column_names=feature_cols,
    )
)

# Later, once the ground truth is known, log the actuals for the same prediction IDs
response = arize_client.log(
    dataframe=your_sample_df,
    model_id="fraud-model",
    model_type=ModelTypes.SCORE_CATEGORICAL,
    environment=Environments.PRODUCTION,
    schema=Schema(
        prediction_id_column_name="prediction_id",
        actual_label_column_name="actual_label",
    )
)
```

#### Logging features, predictions, actuals, and SHAP values together

```python
# feature_col_name and shap_col_name are parallel lists of feature columns and
# their corresponding SHAP-value columns in your_sample_df
response = arize_client.log(
    dataframe=your_sample_df,
    model_id="fraud-model",
    model_version="1.0",
    model_type=ModelTypes.NUMERIC,
    environment=Environments.PRODUCTION,
    schema=Schema(
        prediction_id_column_name="prediction_id",
        timestamp_column_name="prediction_ts",
        prediction_label_column_name="prediction_label",
        actual_label_column_name="actual_label",
        feature_column_names=feature_col_name,
        shap_values_column_names=dict(zip(feature_col_name, shap_col_name)),
    )
)
```
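
To confirm the upload succeeded, you can inspect the returned response, mirroring the status handling shown earlier in this guide (a sketch; it assumes an HTTP-style response object with `status_code` and `text`):

```python
# Illustrative status check, following the pattern used in the earlier examples
if response.status_code != 200:
    print(f'logging failed with response code {response.status_code}, {response.text}')
else:
    print('logging succeeded')
```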

### 4. Log In for Analytics

That's it! Once your service is deployed and predictions are being logged, you can log into your Arize account and dive into your data, slicing it by features, tags, models, time, and more.

#### Analytics Dashboard

<div align="center">
  <img src="https://storage.googleapis.com/arize-assets/Arize%20UI%20platform.jpg" /><br><br>
</div>

---

### Logging SHAP values

Log feature importance as SHAP values to the Arize platform to explain your model's predictions. By logging SHAP values, you can view the global feature importances of your predictions and perform cohort- and prediction-based analysis to compare feature importance values under varying conditions. For more information on SHAP and how to use SHAP with Arize, check out our [SHAP documentation](https://docs.arize.com/arize/product-guides/explainability).
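
As a minimal sketch (not Arize-specific; it assumes a trained tree-based, single-output model, the open-source `shap` package, and the `feature_cols` list from the pandas example above), SHAP values can be computed and added to your DataFrame as one `<feature>_shap` column per feature before logging:

```python
import shap
import pandas as pd

# Compute per-row SHAP values for the logged features (`model` is your trained tree model, assumed)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(your_sample_df[feature_cols])

# Add one "<feature>_shap" column per feature alongside the original data
shap_cols = [f"{col}_shap" for col in feature_cols]
shap_df = pd.DataFrame(shap_values, columns=shap_cols, index=your_sample_df.index)
your_sample_df = pd.concat([your_sample_df, shap_df], axis=1)

# These columns can then be referenced in the Schema, e.g.
# shap_values_column_names=dict(zip(feature_cols, shap_cols))
```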

---

### Other languages

If you are using a different language, you can post an HTTP request to the Arize edge servers to log your events.

### HTTP post request to Arize

```bash
curl -X POST -H "Authorization: YOUR_API_KEY" "https://log.arize.com/v1/log" -d '{"space_key": "YOUR_SPACE_KEY", "model_id": "test_model_1", "prediction_id":"test100", "prediction":{"model_version": "v1.23.64", "features":{"state":{"string": "CO"}, "item_count":{"int": 10}, "charge_amt":{"float": 12.34}, "physical_card":{"string": true}}, "prediction_label": {"binary": false}}}'
```

---

### Website

Visit us at: https://arize.com/model-monitoring/

Official documentation: https://docs.arize.com/arize/

### Additional Resources

- [What is ML observability?](https://arize.com/what-is-ml-observability/)
- [Playbook to model monitoring in production](https://arize.com/the-playbook-to-monitor-your-models-performance-in-production/)
- [Using statistical distance metrics for ML monitoring and observability](https://arize.com/using-statistical-distance-metrics-for-machine-learning-observability/)
- [ML infrastructure tools for data preparation](https://arize.com/ml-infrastructure-tools-for-data-preparation/)
- [ML infrastructure tools for model building](https://arize.com/ml-infrastructure-tools-for-model-building/)
- [ML infrastructure tools for production](https://arize.com/ml-infrastructure-tools-for-production-part-1/)
- [ML infrastructure tools for model deployment and model serving](https://arize.com/ml-infrastructure-tools-for-production-part-2-model-deployment-and-serving/)
- [ML infrastructure tools for ML monitoring and observability](https://arize.com/ml-infrastructure-tools-ml-observability/)

Visit the [Arize Blog](https://arize.com/blog) and [Resource Center](https://arize.com/resource-hub/) for more resources on ML observability and model monitoring.

            
