# quotientai
[PyPI](https://pypi.org/project/quotientai)
## Overview
`quotientai` is an SDK and CLI for managing artifacts (prompts, datasets) and running evaluations on [Quotient](https://quotientai.co).
## Installation
```console
pip install quotientai
```
## Usage
Create an API key on Quotient and set it as an environment variable called `QUOTIENT_API_KEY`. Then follow the examples below or see our [docs](https://docs.quotientai.co) for a more comprehensive walkthrough.
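For example, in a POSIX shell you might export the key before running your code (the value below is a placeholder):
```console
export QUOTIENT_API_KEY="your-api-key"
```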
### Examples
**Create a prompt:**
```python
from quotientai import QuotientAI
quotient = QuotientAI()
new_prompt = quotient.prompts.create(
    name="customer-support-inquiry",
    system_prompt="You are a helpful assistant.",
    user_prompt="How can I assist you today?"
)
print(new_prompt)
```
**Create a dataset:**
```python
from quotientai import QuotientAI
quotient = QuotientAI()
new_dataset = quotient.datasets.create(
    name="my-sample-dataset",
    description="My first dataset",
    rows=[
        {"input": "Sample input", "expected": "Sample output"},
        {"input": "Another input", "expected": "Another output"}
    ]
)
print(new_dataset)
```
**Create a log with hallucination detection:**
Log an event with hallucination detection. This creates a log event in Quotient and runs hallucination detection on the model output, input, and documents. Logging is a fire-and-forget operation, so it does not block the execution of your code.
Additional examples can be found in the [examples](examples) directory.
```python
from quotientai import QuotientAI
quotient = QuotientAI()
quotient_logger = quotient.logger.init(
    # Required
    app_name="my-app",
    environment="dev",
    # Dynamic labels for slicing and dicing analytics, e.g. by customer or feature
    tags={"model": "gpt-4o", "feature": "customer-support"},
    hallucination_detection=True,
    inconsistency_detection=True,
)
# Mock retrieved documents
retrieved_documents = [{"page_content": "Sample document"}]
response = quotient_logger.log(
    user_query="Sample input",
    model_output="Sample output",
    # Page content from the documents your retriever used to generate the model output
    documents=[doc["page_content"] for doc in retrieved_documents],
    # Message history from your chat history
    message_history=[
        {"role": "system", "content": "You are an expert on geography."},
        {"role": "user", "content": "What is the capital of France?"},
        {"role": "assistant", "content": "The capital of France is Paris"},
    ],
    # Instructions for the model to follow
    instructions=[
        "You are a helpful assistant that answers questions about the world.",
        "Answer the question in a concise manner. If you are not sure, say 'I don't know'.",
    ],
    # Tags can be overridden at log time
    tags={"model": "gpt-4o-mini", "feature": "customer-support"},
)
print(response)
```
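As a rough sketch of how logging fits into an application, the snippet below wraps a retrieval-augmented answer function and logs each call. Here `retrieve` and `generate` are hypothetical placeholders for your own retriever and model call; only `quotient.logger.init` and `quotient_logger.log`, shown above, come from the SDK.
```python
from quotientai import QuotientAI

quotient = QuotientAI()
quotient_logger = quotient.logger.init(
    app_name="my-app",
    environment="dev",
    hallucination_detection=True,
)

def retrieve(query: str) -> list[str]:
    # Hypothetical placeholder: return page contents from your own retriever
    return ["Paris is the capital and largest city of France."]

def generate(query: str, documents: list[str]) -> str:
    # Hypothetical placeholder: call your own LLM with the query and retrieved context
    return "The capital of France is Paris."

def answer(query: str) -> str:
    documents = retrieve(query)
    output = generate(query, documents)
    # Fire-and-forget: the log call does not block the rest of the request
    quotient_logger.log(
        user_query=query,
        model_output=output,
        documents=documents,
    )
    return output

print(answer("What is the capital of France?"))
```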
You can also use the async client if you need to create logs asynchronously.
```python
from quotientai import AsyncQuotientAI
import asyncio
quotient = AsyncQuotientAI()
quotient_logger = quotient.logger.init(
    # Required
    app_name="my-app",
    environment="dev",
    # Dynamic labels for slicing and dicing analytics, e.g. by customer or feature
    tags={"model": "gpt-4o", "feature": "customer-support"},
    hallucination_detection=True,
    inconsistency_detection=True,
)
async def main():
    # Mock retrieved documents
    retrieved_documents = [{"page_content": "Sample document"}]

    response = await quotient_logger.log(
        user_query="Sample input",
        model_output="Sample output",
        # Page content from the documents your retriever used to generate the model output
        documents=[doc["page_content"] for doc in retrieved_documents],
        # Message history from your chat history
        message_history=[
            {"role": "system", "content": "You are an expert on geography."},
            {"role": "user", "content": "What is the capital of France?"},
            {"role": "assistant", "content": "The capital of France is Paris"},
        ],
        # Instructions for the model to follow
        instructions=[
            "You are a helpful assistant that answers questions about the world.",
            "Answer the question in a concise manner. If you are not sure, say 'I don't know'.",
        ],
        # Tags can be overridden at log time
        tags={"model": "gpt-4o-mini", "feature": "customer-support"},
    )

    print(response)


# Run the async function
asyncio.run(main())
```
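Because the async client returns awaitables, you can also issue several log calls concurrently instead of awaiting each one in turn. The sketch below assumes the `quotient_logger` initialized in the async example above and reuses only the `log` parameters shown there.
```python
import asyncio

async def log_many():
    samples = [
        ("What is the capital of France?", "The capital of France is Paris."),
        ("What is the capital of Japan?", "The capital of Japan is Tokyo."),
    ]
    # Issue the log calls concurrently; assumes quotient_logger from the example above
    await asyncio.gather(
        *(
            quotient_logger.log(
                user_query=query,
                model_output=output,
                documents=["Sample document"],
            )
            for query, output in samples
        )
    )

asyncio.run(log_many())
```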