inspeqai

Name: inspeqai
Version: 1.0.30
Summary: Inspeq AI Python SDK
Author: Inspeq (support@inspeq.ai)
License: Apache 2.0
Requires Python: >=3.10
Uploaded: 2024-09-25 15:27:32
Requirements: certifi==2023.11.17, charset-normalizer==3.3.2, idna==3.6, python-dotenv==1.0.1, requests==2.31.0, urllib3==2.2.0
Source: https://github.com/inspeq/inspeq-py-sdk
Documentation: https://docs.inspeq.ai
# Inspeq Python SDK

- **Website:** [Inspeq.ai](https://www.inspeq.ai)
- **Inspeq App:** [Inspeq App](https://platform.inspeq.ai)
- **Detailed Documentation:** [Inspeq Documentation](https://docs.inspeq.ai)

## Quickstart Guide

### Installation

Install the Inspeq SDK and python-dotenv using pip:

```bash
pip install inspeqai python-dotenv
```

The `python-dotenv` package is recommended for securely managing your environment variables, such as API keys.

### Obtain SDK API Key and Project Key

Get your API key and Project Key from the [Inspeq App](https://platform.inspeq.ai).

### Usage

Here's a basic example of how to use the Inspeq SDK with environment variables:

```python
import os
from dotenv import load_dotenv
from inspeq.client import InspeqEval

# Load environment variables
load_dotenv()

# Initialize the client
INSPEQ_API_KEY = os.getenv("INSPEQ_API_KEY")
INSPEQ_PROJECT_ID = os.getenv("INSPEQ_PROJECT_ID")
INSPEQ_API_URL = os.getenv("INSPEQ_API_URL")  # Required only for our on-prem customers

inspeq_eval = InspeqEval(inspeq_api_key=INSPEQ_API_KEY, inspeq_project_id=INSPEQ_PROJECT_ID)

# Prepare input data
input_data = [{
    "prompt": "What is the capital of France?",
    "response": "Paris is the capital of France.",
    "context": "The user is asking about European capitals."
}]

# Define metrics to evaluate
metrics_list = ["RESPONSE_TONE", "FACTUAL_CONSISTENCY", "ANSWER_RELEVANCE"]

try:
    results = inspeq_eval.evaluate_llm_task(
        metrics_list=metrics_list,
        input_data=input_data,
        task_name="capital_question"
    )
    print(results)
except Exception as e:
    print(f"An error occurred: {str(e)}")
```

Make sure to create a `.env` file in your project root with your Inspeq credentials:

```
INSPEQ_API_KEY=your_inspeq_sdk_key
INSPEQ_PROJECT_ID=your_project_id
INSPEQ_API_URL=your_inspeq_backend_url
```

### Available Metrics 

```python
metrics_list = [
    "RESPONSE_TONE",
    "ANSWER_RELEVANCE",
    "FACTUAL_CONSISTENCY",
    "CONCEPTUAL_SIMILARITY",
    "READABILITY",
    "COHERENCE",
    "CLARITY",
    "DIVERSITY",
    "CREATIVITY",
    "NARRATIVE_CONTINUITY",
    "GRAMMATICAL_CORRECTNESS",
    "DATA_LEAKAGE",
    "COMPRESSION_SCORE",
    "FUZZY_SCORE",
    "ROUGE_SCORE",
    "BLEU_SCORE",
    "METEOR_SCORE",
    "COSINE_SIMILARITY_SCORE",
    "INSECURE_OUTPUT",
    "INVISIBLE_TEXT",
    "TOXICITY",
    "PROMPT_INJECTION"
]
```
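
As an illustrative sketch, any subset of these names can be passed to the same `evaluate_llm_task` call shown in the Usage section. The example below additionally assumes that `input_data` may contain more than one item, which its list shape suggests but the quickstart snippet does not demonstrate.

```python
# Sketch only: evaluating a subset of metrics over several items in one call.
# Assumes `input_data` accepts multiple entries, as its list shape suggests.
quality_metrics = ["READABILITY", "COHERENCE", "GRAMMATICAL_CORRECTNESS"]

input_data = [
    {
        "prompt": "Summarise the article in two sentences.",
        "response": "The article explains the new policy in two sentences.",
        "context": "Full article text goes here.",
    },
    {
        "prompt": "What is the capital of France?",
        "response": "Paris is the capital of France.",
        "context": "The user is asking about European capitals.",
    },
]

results = inspeq_eval.evaluate_llm_task(
    metrics_list=quality_metrics,
    input_data=input_data,
    task_name="readability_batch",
)
```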

## Features

The Inspeq SDK provides a range of metrics to evaluate language model outputs:

## Response Tone
Assesses the tone and style of the generated response.

## Answer Relevance
Measures the degree to which the generated content directly addresses and pertains to the specific question or prompt provided by the user.

## Factual Consistency
Measures the extent to which the model hallucinates, i.e. whether the response is made up rather than grounded in the supplied context.

## Conceptual Similarity
Measures the extent to which the model response aligns with and reflects the underlying ideas or concepts present in the provided context or prompt.

## Readability
Assesses whether the model response can be read and understood by the intended audience, taking into account factors such as vocabulary complexity, sentence structure, and overall clarity.

## Coherence
Evaluates how well the model generates coherent and logical responses that align with the context of the question.

## Clarity
Assesses the clarity of the response's language and structure, based on grammar, readability, concise sentences and word choice, and low redundancy.

## Diversity
Assesses the diversity of vocabulary used in a piece of text.
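
Vocabulary diversity is often illustrated with a type-token ratio (unique words divided by total words). The snippet below is a standalone sketch of that idea, not the SDK's actual DIVERSITY computation.

```python
# Standalone illustration of lexical diversity (type-token ratio);
# not the SDK's internal DIVERSITY implementation.
def type_token_ratio(text: str) -> float:
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

print(type_token_ratio("the cat sat on the mat"))  # 0.833... (5 unique / 6 total)
```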

## Creativity
Assesses the ability of the model to generate imaginative and novel responses that extend beyond standard or expected answers.

## Narrative Continuity
Measures the consistency and logical flow of the response throughout the generated text, ensuring that the progression of events remains coherent and connected.

## Grammatical Correctness
Checks whether the model response adheres to the rules of syntax, is free from errors, and follows the conventions of the target language.

## Prompt Injection
Evaluates the susceptibility of language models or AI systems to adversarial prompts that manipulate or alter the system's intended behavior.

## Data Leakage
Measures the extent to which sensitive or unintended information is exposed during model training or inference.

## Insecure Output
Detects whether the response contains insecure or dangerous code patterns that could lead to potential security vulnerabilities.

## Invisible Text
Evaluates if the input contains invisible or non-printable characters that might be used maliciously to hide information or manipulate the model's behavior.

## Toxicity
Evaluates the level of harmful or toxic language present in a given text.

## BLEU Score
Measures the quality of text generated by models by comparing it to one or more reference texts.
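
For orientation, the standalone example below computes a sentence-level BLEU score with NLTK. It illustrates the metric itself, not the SDK's BLEU_SCORE internals, and assumes `nltk` is installed.

```python
# Standalone BLEU illustration using NLTK; not the SDK's BLEU_SCORE internals.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = ["paris is the capital of france".split()]
candidate = "the capital of france is paris".split()

score = sentence_bleu(references, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```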

## Compression Score
Measures the ratio of the length of the generated summary to the length of the original text.
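
In other words, a compression score of this kind is simply a length ratio between summary and source. A minimal sketch follows; the SDK's exact definition (word vs. character counts, direction of the ratio) may differ.

```python
# Minimal sketch of a length-ratio compression score; the SDK's exact
# formula may count characters instead of words or invert the ratio.
def compression_ratio(original: str, summary: str) -> float:
    return len(summary.split()) / max(len(original.split()), 1)
```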

## Cosine Similarity Score
Measures the similarity between the original text and the generated summary by treating both as vectors in a multi-dimensional space.
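
As a concept sketch, the snippet below treats both texts as word-count vectors and computes their cosine similarity using only the standard library; the SDK may vectorise text differently (for example with embeddings or TF-IDF).

```python
# Concept sketch: cosine similarity over word-count vectors.
# The SDK may use embeddings or TF-IDF rather than raw counts.
from collections import Counter
import math

def cosine_similarity(a: str, b: str) -> float:
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

print(cosine_similarity("paris is the capital", "the capital is paris"))  # 1.0
```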

## Fuzzy Score
Measures the similarity between two pieces of text based on approximate matching rather than exact matching.
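
Approximate matching of this kind can be illustrated with `difflib` from the standard library; this is only a stand-in for whatever fuzzy-matching algorithm the SDK actually uses.

```python
# Illustration of approximate string matching via difflib;
# a stand-in, not the SDK's FUZZY_SCORE algorithm.
from difflib import SequenceMatcher

ratio = SequenceMatcher(None, "The capital of France is Paris",
                        "Paris is France's capital").ratio()
print(f"fuzzy similarity: {ratio:.2f}")
```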

## METEOR Score
Evaluates the quality of generated summaries by comparing them to reference summaries, considering matches at the level of unigrams and accounting for synonyms and stemming.

## ROUGE Score
A set of metrics used to evaluate the quality of generated summaries by comparing them to one or more reference summaries.
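
For reference, the `rouge-score` package computes comparisons of this kind as shown below; this demonstrates the metric family itself, not the SDK's ROUGE_SCORE internals, and assumes `pip install rouge-score`.

```python
# Standalone ROUGE illustration using the rouge-score package;
# not the SDK's ROUGE_SCORE internals.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
scores = scorer.score("Paris is the capital of France.",       # reference
                      "The capital city of France is Paris.")  # generated
print(scores["rougeL"].fmeasure)
```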


## Advanced Usage

### Custom Configurations

You can provide custom configurations for metrics:

```python
metrics_config = {
    "response_tone_config": {
        "threshold": 0.5,
        "custom_labels": ["Negative", "Neutral", "Positive"],
        "label_thresholds": [0, 0.5, 0.7, 1]
    }
}

results = inspeq_eval.evaluate_llm_task(
    metrics_list=["RESPONSE_TONE"],
    input_data=input_data,
    task_name="custom_config_task",
    metrics_config=metrics_config
)
```

## Error Handling

The SDK uses custom exceptions for different types of errors (a hedged usage sketch follows the list):

- **APIError:** For API-related issues
- **ConfigError:** For invalid configuration
- **InputError:** For invalid input data
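
The exception names above come straight from this list, but the import path in the sketch below is an assumption; check it against your installed version of the package.

```python
# Assumed import path for the SDK's custom exceptions; verify against
# your installed version before relying on it.
from inspeq.client import InspeqEval, APIError, ConfigError, InputError

try:
    results = inspeq_eval.evaluate_llm_task(
        metrics_list=["RESPONSE_TONE"],
        input_data=input_data,
        task_name="error_handling_demo",
    )
except InputError as exc:
    print(f"Invalid input data: {exc}")
except ConfigError as exc:
    print(f"Invalid configuration: {exc}")
except APIError as exc:
    print(f"API call failed: {exc}")
```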

## Additional Resources

For detailed API documentation, visit [Inspeq Documentation](https://docs.inspeq.ai).
For support or questions, contact our support team through the Inspeq App.

## License

This SDK is distributed under the terms of the Apache License 2.0.


