arize-phoenix-evals

Name: arize-phoenix-evals
Version: 0.8.1
Summary: LLM Evaluations
Upload time: 2024-05-04 00:18:05
Home page: None
Maintainer: None
Docs URL: None
Author: None
Requires Python: <3.13,>=3.8
License: Elastic-2.0
Keywords: explainability, monitoring, observability
Requirements: No requirements were recorded.
# arize-phoenix-evals

Phoenix provides tooling to evaluate LLM applications, including tools to determine the relevance or irrelevance of documents retrieved by a retrieval-augmented generation (RAG) application, whether or not a response is toxic, and much more.

Phoenix's approach to LLM evals is notable for the following reasons:

-   Includes pre-tested templates and convenience functions for a set of common Eval “tasks” (see the short sketch after this list)
-   Data science rigor applied to the testing of model and template combinations
-   Designed to run as fast as possible on batches of data
-   Includes benchmark datasets and tests for each eval function
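
As a quick illustration of the templates mentioned in the first point, the sketch below relies only on the `RAG_RELEVANCY_PROMPT_TEMPLATE` and `RAG_RELEVANCY_PROMPT_RAILS_MAP` exports used in the Usage section; it simply prints a pre-tested template and its output "rails" (the labels the eval is constrained to return):

```python
# Minimal sketch: inspect a pre-tested eval template and its "rails".
# Uses only exports that also appear in the Usage section below.
from phoenix.evals import (
    RAG_RELEVANCY_PROMPT_TEMPLATE,
    RAG_RELEVANCY_PROMPT_RAILS_MAP,
)

print(RAG_RELEVANCY_PROMPT_TEMPLATE)                  # the prompt sent to the judge LLM
print(list(RAG_RELEVANCY_PROMPT_RAILS_MAP.values()))  # the allowed output labels
```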

## Installation

Install this sub-package of `arize-phoenix` via `pip`:

```shell
pip install arize-phoenix-evals
```

Note that you will also need to install the SDK of the LLM vendor you would like to use with LLM Evals. For example, to use OpenAI's GPT-4, install the OpenAI Python SDK:

```shell
pip install 'openai>=1.0.0'
```
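
Once the vendor SDK is installed, the model wrapper used in the Usage section below reads its API key from the environment. A minimal sketch, using only names and arguments that also appear in the example below:

```python
import os

from phoenix.evals import OpenAIModel

# The OpenAI SDK picks the key up from this environment variable.
os.environ["OPENAI_API_KEY"] = "<your-openai-key>"

# Same wrapper and arguments as in the Usage example below.
model = OpenAIModel(model="gpt-4", temperature=0.0)
```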

## Usage

Here is an example of running the RAG relevance eval on a dataset of Wikipedia questions and answers:

```python
import os
from phoenix.evals import (
    RAG_RELEVANCY_PROMPT_TEMPLATE,
    RAG_RELEVANCY_PROMPT_RAILS_MAP,
    OpenAIModel,
    download_benchmark_dataset,
    llm_classify,
)
from sklearn.metrics import precision_recall_fscore_support, confusion_matrix, ConfusionMatrixDisplay

os.environ["OPENAI_API_KEY"] = "<your-openai-key>"

# Download the benchmark golden dataset
df = download_benchmark_dataset(
    task="binary-relevance-classification", dataset_name="wiki_qa-train"
)
# Sample and rename the columns to match the template
df = df.sample(100)
df = df.rename(
    columns={
        "query_text": "input",
        "document_text": "reference",
    },
)
model = OpenAIModel(
    model="gpt-4",
    temperature=0.0,
)


rails = list(RAG_RELEVANCY_PROMPT_RAILS_MAP.values())
df[["eval_relevance"]] = llm_classify(df, model, RAG_RELEVANCY_PROMPT_TEMPLATE, rails)
# The golden dataset labels are True/False; map them to "relevant"/"irrelevant"
# so the scikit-learn metrics below can compare them to the template output (same format)
y_true = df["relevant"].map({True: "relevant", False: "irrelevant"})
y_pred = df["eval_relevance"]

# Compute Per-Class Precision, Recall, F1 Score, Support
precision, recall, f1, support = precision_recall_fscore_support(y_true, y_pred)
```
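
The example imports `confusion_matrix` and `ConfusionMatrixDisplay` without using them; as a follow-up sketch (not part of the original example), they can visualize where the eval output disagrees with the golden labels, continuing from `y_true` and `y_pred` above:

```python
# Sketch: plot a confusion matrix of golden labels vs. eval output.
# Continues from the example above, where y_true and y_pred are defined.
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

labels = ["relevant", "irrelevant"]
cm = confusion_matrix(y_true, y_pred, labels=labels)
ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=labels).plot()
plt.show()
```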

To learn more about LLM Evals, see the [LLM Evals documentation](https://docs.arize.com/phoenix/concepts/llm-evals/).
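
Finally, the toxicity check mentioned at the top of this README follows the same `llm_classify` pattern. The sketch below assumes `TOXICITY_PROMPT_TEMPLATE` and `TOXICITY_PROMPT_RAILS_MAP` exports and an `input` column holding the text to grade; consult the documentation linked above for the exact names in your version:

```python
# Sketch only: toxicity eval via the same llm_classify pattern as above.
# TOXICITY_PROMPT_TEMPLATE / TOXICITY_PROMPT_RAILS_MAP and the "input" column
# name are assumptions; check the Phoenix docs for the exact interface.
import os

import pandas as pd
from phoenix.evals import (
    TOXICITY_PROMPT_TEMPLATE,
    TOXICITY_PROMPT_RAILS_MAP,
    OpenAIModel,
    llm_classify,
)

os.environ["OPENAI_API_KEY"] = "<your-openai-key>"

responses = pd.DataFrame({"input": ["You are wonderful!", "I hate everything about this."]})
model = OpenAIModel(model="gpt-4", temperature=0.0)
rails = list(TOXICITY_PROMPT_RAILS_MAP.values())
responses[["eval_toxicity"]] = llm_classify(responses, model, TOXICITY_PROMPT_TEMPLATE, rails)
```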

            

Raw data

{
    "_id": null,
    "home_page": null,
    "name": "arize-phoenix-evals",
    "maintainer": null,
    "docs_url": null,
    "requires_python": "<3.13,>=3.8",
    "maintainer_email": null,
    "keywords": "Explainability, Monitoring, Observability",
    "author": null,
    "author_email": "Arize AI <phoenix-devs@arize.com>",
    "download_url": "https://files.pythonhosted.org/packages/30/51/d80a0fbefc05491bda1e966cd0aaf4a72c24066759389b80c0b962d9b97a/arize_phoenix_evals-0.8.1.tar.gz",
    "platform": null,
    "description": "# arize-phoenix-evals\n\nPhoenix provides tooling to evaluate LLM applications, including tools to determine the relevance or irrelevance of documents retrieved by retrieval-augmented generation (RAG) application, whether or not the response is toxic, and much more.\n\nPhoenix's approach to LLM evals is notable for the following reasons:\n\n-   Includes pre-tested templates and convenience functions for a set of common Eval \u201ctasks\u201d\n-   Data science rigor applied to the testing of model and template combinations\n-   Designed to run as fast as possible on batches of data\n-   Includes benchmark datasets and tests for each eval function\n\n## Installation\n\nInstall the arize-phoenix sub-package via `pip`\n\n```shell\npip install arize-phoenix-evals\n```\n\nNote you will also have to install the LLM vendor SDK you would like to use with LLM Evals. For example, to use OpenAI's GPT-4, you will need to install the OpenAI Python SDK:\n\n```shell\npip install 'openai>=1.0.0'\n```\n\n## Usage\n\nHere is an example of running the RAG relevance eval on a dataset of Wikipedia questions and answers:\n\n```python\nimport os\nfrom phoenix.evals import (\n    RAG_RELEVANCY_PROMPT_TEMPLATE,\n    RAG_RELEVANCY_PROMPT_RAILS_MAP,\n    OpenAIModel,\n    download_benchmark_dataset,\n    llm_classify,\n)\nfrom sklearn.metrics import precision_recall_fscore_support, confusion_matrix, ConfusionMatrixDisplay\n\nos.environ[\"OPENAI_API_KEY\"] = \"<your-openai-key>\"\n\n# Download the benchmark golden dataset\ndf = download_benchmark_dataset(\n    task=\"binary-relevance-classification\", dataset_name=\"wiki_qa-train\"\n)\n# Sample and re-name the columns to match the template\ndf = df.sample(100)\ndf = df.rename(\n    columns={\n        \"query_text\": \"input\",\n        \"document_text\": \"reference\",\n    },\n)\nmodel = OpenAIModel(\n    model=\"gpt-4\",\n    temperature=0.0,\n)\n\n\nrails =list(RAG_RELEVANCY_PROMPT_RAILS_MAP.values())\ndf[[\"eval_relevance\"]] = llm_classify(df, model, RAG_RELEVANCY_PROMPT_TEMPLATE, rails)\n#Golden dataset has True/False map to -> \"irrelevant\" / \"relevant\"\n#we can then scikit compare to output of template - same format\ny_true = df[\"relevant\"].map({True: \"relevant\", False: \"irrelevant\"})\ny_pred = df[\"eval_relevance\"]\n\n# Compute Per-Class Precision, Recall, F1 Score, Support\nprecision, recall, f1, support = precision_recall_fscore_support(y_true, y_pred)\n```\n\nTo learn more about LLM Evals, see the [LLM Evals documentation](https://docs.arize.com/phoenix/concepts/llm-evals/).\n",
    "bugtrack_url": null,
    "license": "Elastic-2.0",
    "summary": "LLM Evaluations",
    "version": "0.8.1",
    "project_urls": {
        "Documentation": "https://docs.arize.com/phoenix/",
        "Issues": "https://github.com/Arize-ai/phoenix/issues",
        "Source": "https://github.com/Arize-ai/phoenix"
    },
    "split_keywords": [
        "explainability",
        " monitoring",
        " observability"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "ad5fa0e0d86cb6221fffd95aa15601ab64417eef22df0c2124b5700eae60478b",
                "md5": "630a7902c42385c4a2982f972ce4b7cf",
                "sha256": "8963e8fef12f2912944bd16a58e880d72790ca26008e4c275158102f0b0e6582"
            },
            "downloads": -1,
            "filename": "arize_phoenix_evals-0.8.1-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "630a7902c42385c4a2982f972ce4b7cf",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": "<3.13,>=3.8",
            "size": 47393,
            "upload_time": "2024-05-04T00:18:03",
            "upload_time_iso_8601": "2024-05-04T00:18:03.419326Z",
            "url": "https://files.pythonhosted.org/packages/ad/5f/a0e0d86cb6221fffd95aa15601ab64417eef22df0c2124b5700eae60478b/arize_phoenix_evals-0.8.1-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "3051d80a0fbefc05491bda1e966cd0aaf4a72c24066759389b80c0b962d9b97a",
                "md5": "f8a541406a54ab010028c1a2a3d3ba9b",
                "sha256": "c71d8f789c1e439f9e0ce5c9fe28f013feba886096e157f78a5e1085b2dd42e6"
            },
            "downloads": -1,
            "filename": "arize_phoenix_evals-0.8.1.tar.gz",
            "has_sig": false,
            "md5_digest": "f8a541406a54ab010028c1a2a3d3ba9b",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": "<3.13,>=3.8",
            "size": 35669,
            "upload_time": "2024-05-04T00:18:05",
            "upload_time_iso_8601": "2024-05-04T00:18:05.639108Z",
            "url": "https://files.pythonhosted.org/packages/30/51/d80a0fbefc05491bda1e966cd0aaf4a72c24066759389b80c0b962d9b97a/arize_phoenix_evals-0.8.1.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-05-04 00:18:05",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "Arize-ai",
    "github_project": "phoenix",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "tox": true,
    "lcname": "arize-phoenix-evals"
}
        