inspeqai

- Name: inspeqai
- Version: 1.0.19
- Summary: Inspeq AI SDK
- Author: Inspeq
- License: Apache 2.0
- Requires Python: >=3.10
- Upload time: 2024-04-04 18:10:55
- Source: https://github.com/inspeq/inspeq-py-sdk
- Requirements: none recorded
# Inspeqai Python SDK

- **Website:** https://www.inspeq.ai
- **Inspeq app:** https://app.inspeq.ai
- **Detailed Documentation:** https://docs.inspeq.ai

## Quickstart

### Create a Virtual Environment

### Linux / macOS (using venv, Python 3)

1. Open a terminal.
2. Navigate to the directory where you want to create the virtual environment.
3. Run the following command:

```bash
python3 -m venv venv
```

#### Activate it

```bash
source venv/bin/activate
```

### Windows

1. Open a terminal.
2. Navigate to the directory where you want to create the virtual environment.
3. Run the following command:

```bash
python -m venv venv
```

#### Activate it

```bash
venv\Scripts\activate
```

#### Make sure the environment is activated every time you use the package

### SDK Installation

Run the following command in your terminal:

```sh
pip install inspeqai
```

### Get SDK API keys

Get your API key from the <a href="https://app.inspeq.ai/" target="_blank">Inspeq app</a>.

### Usage

Create a `main.py` file and use the code snippet below:

```py
from inspeq.client import Evaluator

# Initialization
API_KEY = "your_sdk_api_key"
inspeq_instance = Evaluator(sdk_api_key=API_KEY)

# Example input data. Keep the key names ("prompt", "response") exactly as
# shown; replace only the values with your own prompt and LLM output.
input_data = {
    "prompt": "llm_prompt",
    "response": "llm_output",
}

print("Word limit test:", inspeq_instance.word_limit_test(input_data))
```
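Hardcoding the key is fine for a quick test, but a common pattern is to read it from an environment variable instead. Here is a minimal sketch; the variable name `INSPEQ_SDK_API_KEY` is an assumption for illustration, not something the SDK requires:

```python
import os

def load_api_key(env_var: str = "INSPEQ_SDK_API_KEY") -> str:
    """Read the SDK API key from an environment variable.

    The variable name is illustrative; any name works as long as it is
    set in the environment before the script runs.
    """
    key = os.environ.get(env_var, "").strip()
    if not key:
        raise RuntimeError(f"Set {env_var} to your Inspeq SDK API key.")
    return key

# Hypothetical usage with the Evaluator shown above:
# inspeq_instance = Evaluator(sdk_api_key=load_api_key())
```

This keeps the key out of source control while leaving the rest of the snippet unchanged.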

#### Get all metrics

```py
from inspeq.client import Evaluator

# Initialization
API_KEY = "your_sdk_api_key"
inspeq_instance = Evaluator(sdk_api_key=API_KEY)

# Example input data. get_all_metrics requires all three keys ("prompt",
# "context", "response"); keep the key names exactly as shown and replace
# only the values with your own data.
input_data = {
    "prompt": "your_llm_prompt",
    "context": "your_llm_context",
    "response": "your_llm_output",
}

# Get all metrics in one call
print(inspeq_instance.get_all_metrics(input_data))
```

After you run the file, the results for all metrics are printed to your terminal or output window.
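If the returned results are JSON-serializable (an assumption; check the official documentation for the exact return shape), pretty-printing makes the terminal output easier to scan. The sample payload below is hypothetical, not the SDK's actual schema:

```python
import json

def pretty_print_metrics(results) -> str:
    """Format a metrics payload for readable terminal output.

    Assumes the value is JSON-serializable (dict or list); adapt if the
    SDK returns something else.
    """
    return json.dumps(results, indent=2, sort_keys=True)

# Hypothetical example payload:
sample = {"word_limit_test": {"label": "Pass", "score": 1.0}}
print(pretty_print_metrics(sample))
```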

### All Metrics Provided by the Inspeq SDK

Different metrics require different parameters; see the official <a href="https://docs.inspeq.ai/" target="_blank">documentation</a> for details.

### Supported Features

Metrics:

- Factual Consistency:
  Factual Consistency (FC) pertains to the precision and correctness of information articulated in text produced by Large Language Models (LLMs). It involves the comparison of generated information with the given context, input, or anticipated factual knowledge.

- Do Not Use Keywords:
  Tests against a list of keywords that should not be present in the response.
  
- Answer Relevance:
  Answer Relevance assesses the alignment between the model's responses and the intended meaning of the input.

- Word Limit Test:
  Check if the generated text adheres to specified word limits.

- Response Tonality:
  Tonality refers to the type of tone or overall sentiment highlighted in the response.

- Conceptual Similarity:
  This refers to the semantic similarity or relatedness between the generated response and the provided context.

- Coherence:
  The ability of the LLM to generate text that is organized, well-structured, and easy to understand.

- Readability:
  Readability scores help assess whether the LLM’s generated text is appropriate for the target audience’s reading level.

- Clarity:
  Clarity is a subjective metric and refers to the response’s clarity in terms of language and structure.

- Model Refusal:
  Model refusal detects whether the model responds with a refusal response or not. Example of a refusal response - "I'm sorry, but I cannot provide you with a credit card number. It is against ethical and legal guidelines to share such sensitive information. If you have any other questions or need assistance with something else, feel free to ask."

- Data Leakage:
  Data leakage detects whether the model response contains any personal information such as credit card numbers, phone numbers, emails, URLs, etc.

- Creativity:
  Creativity is also a subjective concept, especially in AI-generated content. LLMs can be very creative but the results are mostly evaluated by humans. For our story generation and document summarization use cases, we define this metric as a combination of different metrics that could provide a more comprehensive evaluation. We use lexical diversity score, contextual similarity score and hallucination score to evaluate creativity.

- Diversity:
  Lexical diversity metrics assess the diversity of vocabulary used in a piece of text. Higher lexical diversity generally indicates a broader range of words and can contribute to more natural-sounding language.

- Narrative Continuity:
  The narrative continuity metric evaluates whether a generated response maintains coherence and logical flow with the preceding narrative, without introducing abrupt or illogical shifts (e.g., story jumps). It analyzes factors like topic consistency, event/character continuity, and overall coherence to detect discontinuities in the narrative.
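To build intuition for what a lexical diversity metric measures, here is a crude type-token ratio in plain Python. This is a simplified stand-in for illustration only, not the formula the SDK actually uses:

```python
def type_token_ratio(text: str) -> float:
    """Unique words divided by total words; higher means more varied vocabulary.

    A toy approximation of lexical diversity, not the SDK's scoring method.
    """
    words = text.lower().split()
    if not words:
        return 0.0
    return len(set(words)) / len(words)

print(type_token_ratio("the cat sat on the mat"))  # 5 unique words / 6 total
```

Repeated words lower the score, so "the cat sat on the mat" scores below 1.0, while a sentence with no repeats scores exactly 1.0.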

            
