ragmetrics-pkg

Name: ragmetrics-pkg
Version: 0.1.4
Home page: https://ragmetrics.ai
Summary: A package for integrating RagMetrics with LLM calls
Upload time: 2025-01-03 18:51:50
Author: ragmetrics
Requires Python: >=3.6
Requirements: requests
            <h1 align="center">
        <img src="https://cdn.simpleicons.org/python/fff/fff" alt="Python" width=24 height=24> 
Ragmetrics
    </h1>
    <p align="center">Call all LLM APIs using the OpenAI format.</p>
<h4 align="center">
    <a href="https://pypi.org/project/ragmetrics-pkg/" target="_blank">
        <img src="https://img.shields.io/pypi/v/ragmetrics-pkg.svg" alt="PyPI Version">
    </a>
</h4>

Ragmetrics manages:

- Translating inputs to each provider's `completion` endpoint
- Logging consistent `Traces` on the [Ragmetrics portal](https://ragmetrics.ai/)
- Real-time monitoring and A/B test evaluations

Need support for more providers? Contact our team to get it added.

# Usage Docs

```shell
pip install ragmetrics-pkg
```

#### Portal Login

```python
import ragmetrics

# Log in to your Ragmetrics account with a portal key
ragmetrics.login(key="febfe*****************", off=False)
```

The `off` flag lets you toggle tracing: `off=True` turns traces off, and `off=False` turns them back on.
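
For example (reusing the portal-key placeholder above), tracing could be disabled during local development and re-enabled later simply by flipping the flag; a minimal sketch:

```python
import ragmetrics

# Sketch: disable tracing (e.g. in local development); pass off=False
# again later to turn traces back on.
ragmetrics.login(key="febfe*****************", off=True)
```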

#### Monitoring / A/B Testing / Evaluation (based on comparison models)

```python
from openai import OpenAI
import ragmetrics

# `client` is your existing LLM client (an OpenAI client here)
client = OpenAI()

# Start monitoring all LLM calls made through this client
ragmetrics.monitor(client, context={"user": True})
```

The `context` you pass here is fed into Ragmetrics Tasks as a dataset for evaluation.
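
The keys below are purely illustrative (the package does not prescribe a fixed schema here); the sketch just shows how richer context could be attached so those fields are available to evaluation tasks:

```python
from openai import OpenAI
import ragmetrics

client = OpenAI()  # same client as above

# Illustrative context fields (not a fixed schema): arbitrary key/value
# pairs to tag the monitored calls for later evaluation.
ragmetrics.monitor(
    client,
    context={
        "user": "user-1234",           # hypothetical user identifier
        "environment": "staging",      # hypothetical deployment label
        "feature": "support-chatbot",  # hypothetical feature name
    },
)
```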

### Full Usage Example

```python
from openai import OpenAI
import ragmetrics

client = OpenAI()

# Start monitoring all LLM calls from this client
ragmetrics.monitor(client, context={"user": True})

# Use regular OpenAI API calls with an extra optional metadata parameter
chat_completion = client.chat.completions.create(
  model="gpt-4o", 
  messages=[{"role": "user", "content": "Hello Ragmetrics"}],
  metadata={"pipelineStep":"generation", "property1":"Accuracy and Clarity"},
  comparison_model="gpt-4-turbo"
)
```

- `model` selects the response-generation model (`gpt-4o` in this case).
- `messages` carries the developer's actual request to the model.
- `metadata` drives the experiments:
  - `pipelineStep` selects the evaluation step defined for the experiment on the Ragmetrics platform.
  - `property1`, `property2`, ... add the criteria to evaluate; each criterion gets its own numbered key.
- `comparison_model` names the model the response is compared against for A/B testing.
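
Because the monitored call is still a regular OpenAI call, the return value is presumably the standard OpenAI response object and can be read as usual; a minimal sketch:

```python
# Assuming the wrapped client still returns the standard OpenAI response
# object, the generated text is read exactly as it would be without monitoring.
print(chat_completion.choices[0].message.content)
```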

# Code Examples

#### Example 1

```python
from openai import OpenAI
import ragmetrics

ragmetrics.login(key="febfe*****************", off=False)

client = OpenAI()

ragmetrics.monitor(client, context={"user": True})

chat_completion = client.chat.completions.create(
  model="gpt-4o", 
  messages=[{"role": "user", "content": "Hello World !!!"}],
  metadata={"pipelineStep":"generation", "property1":"Accuracy and Clarity"},
  comparison_model="gpt-4-turbo"
)
```

#### Example 2

```python
from openai import OpenAI
import ragmetrics

ragmetrics.login(key="febfe*****************", off=False)

client = OpenAI()

ragmetrics.monitor(client, context={"user": True})

chat_completion = client.chat.completions.create(
  model="gpt-3.5-turbo", 
  messages=[{"role": "user", "content": "How's the day today?"}],
  metadata={"pipelineStep":"generation", "property1":"Accuracy and Clarity", "property2":"Buddy-Friendly Tone"},
  comparison_model="gpt-4o"
)
```

## Portal Keys UI

![portal_keys.png](portal_keys.png)

# Why We Built This

- **Need for simplicity**: We want to give developers a hassle-free way to call and monitor LLM providers and frameworks such as Azure, OpenAI, and LiteLLM.

# Contributors

<!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section -->
<!-- prettier-ignore-start -->
<!-- markdownlint-disable -->

<!-- markdownlint-restore -->
<!-- prettier-ignore-end -->

<!-- ALL-CONTRIBUTORS-LIST:END -->

<a href="https://github.com/RagMetrics/ragmetrics-package/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=RagMetrics/ragmetrics-package" />
</a>



            
