giskard

Name: giskard
Version: 2.10.0
Summary: The testing framework dedicated to ML models, from tabular to LLMs
Author email: Giskard AI <hello@giskard.ai>
Upload time: 2024-04-10 17:18:31
Requires Python: <3.12,>=3.9
License: Apache Software License 2.0
Keywords: artificial intelligence, machine learning, quality, mlops
            <p align="center">
  <img alt="giskardlogo" src="https://raw.githubusercontent.com/giskard-ai/giskard/main/readme/giskard_logo.png#gh-light-mode-only">

</p>
<h1 align="center" weight='300' >The testing & evaluation framework for LLMs & other AI models</h1>
<h3 align="center" weight='300' >Scan AI models to detect risks of performance issues. In 4 lines of code. </h3>
<div align="center">

  [![GitHub release](https://img.shields.io/github/v/release/Giskard-AI/giskard)](https://github.com/Giskard-AI/giskard/releases)
  [![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://github.com/Giskard-AI/giskard/blob/main/LICENSE)
  [![CI](https://github.com/Giskard-AI/giskard/actions/workflows/build-python.yml/badge.svg?branch=main)](https://github.com/Giskard-AI/giskard/actions/workflows/build-python.yml?query=branch%3Amain)
  [![Sonar](https://sonarcloud.io/api/project_badges/measure?project=giskard&metric=alert_status)](https://sonarcloud.io/summary/new_code?id=giskard)
  [![Giskard on Discord](https://img.shields.io/discord/939190303397666868?label=Discord)](https://gisk.ar/discord)

  <a rel="me" href="https://fosstodon.org/@Giskard"></a>

</div>
<h3 align="center">
   <a href="https://docs.giskard.ai/en/latest/index.html"><b>Docs</b></a> &bull;
   <a href="https://www.giskard.ai/knowledge-categories/news/?utm_source=github&utm_medium=github&utm_campaign=github_readme&utm_id=readmeblog"><b>Blog</b></a> &bull;
  <a href="https://www.giskard.ai/?utm_source=github&utm_medium=github&utm_campaign=github_readme&utm_id=readmeblog"><b>Website</b></a> &bull;
  <a href="https://gisk.ar/discord"><b>Discord</b></a>
 </h3>
<br />

## Install Giskard 🐢
Install the latest version of Giskard from PyPI using pip:
```sh
pip install "giskard[llm]" -U
```
We officially support Python 3.9, 3.10 and 3.11.
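If you don't need the LLM-specific features, the base package (without the `[llm]` extra) should be enough for tabular and NLP models; the extra only pulls in the additional LLM dependencies:
```sh
# Base install (tabular & NLP scans); add the [llm] extra for LLM & RAG support
pip install giskard -U
```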
## Try in Colab 📙
[Open Colab notebook](https://colab.research.google.com/github/giskard-ai/giskard/blob/main/docs/getting_started/quickstart/quickstart_llm.ipynb)

______________________________________________________________________

Giskard is a Python library that **automatically detects performance issues and evaluates AI applications**, from LLM-based systems such as RAG agents all the way to traditional ML models for tabular data.

## Scan: Automatically assess your LLM-based agents for performance issues & vulnerabilities ⤵️

Issues detected include: 
- Hallucinations
- Harmful content generation
- Prompt injection
- Robustness issues
- Sensitive information disclosure
- Stereotypes & discrimination
- many more...

<p align="center">
  <img src="https://raw.githubusercontent.com/giskard-ai/giskard/main/readme/scan_updates.gif" alt="Scan Example" width="800">
</p>

## RAG Evaluation Toolkit (RAGET): Automatically generate evaluation datasets & evaluate RAG application answers ⤵️

If you're testing a RAG application, you can get an even more in-depth assessment using **RAGET**, Giskard's RAG Evaluation Toolkit.

- **RAGET** can automatically generate a list of `question`, `reference_answer` and `reference_context` items from the knowledge base of your RAG application. You can then use this generated test set to evaluate your RAG agent.
- **RAGET** computes scores *for each component of the RAG agent*. The scores are computed by aggregating the correctness of the agent’s answers on different question types.

  - Here is the list of components evaluated with **RAGET**:
    - `Generator`: the LLM used inside the RAG to generate the answers
    - `Retriever`: fetches relevant documents from the knowledge base according to the user query
    - `Rewriter`: rewrites the user query to make it more relevant to the knowledge base or to account for chat history
    - `Router`: filters the user query based on its intent
    - `Knowledge Base`: the set of documents given to the RAG to generate the answers
  
<p align="center">
  <img src="https://raw.githubusercontent.com/giskard-ai/giskard/main/readme/RAGET_updated.gif" alt="Test Suite Example" width="800">
</p>


Giskard works with any model, in any environment and integrates seamlessly with your favorite tools ⤵️ <br/>
 
<p align="center">
  <img width='600' src="https://raw.githubusercontent.com/giskard-ai/giskard/main/readme/tools_updated.png">
</p>
<br/>

# Contents

- 🤸‍♀️ **[Quickstart](#quickstart)**
    - **1**. 🏗️ [Build an LLM agent](#build-a-llm-agent)
    - **2**. 🔎 [Scan your model for issues](#scan-your-model-for-issues)
    - **3**. 🪄 [Automatically generate an evaluation dataset for your RAG applications](#automatically-generate-an-evaluation-dataset-for-your-rag-applications)
- 👋 **[Community](#community)**

<h1 id="quickstart">🤸‍♀️ Quickstart</h1>

<h2 id="build-a-llm-agent">1. 🏗️ Build a LLM agent</h2>

Let's build an agent that answers questions about climate change, based on the 2023 Climate Change Synthesis Report by the IPCC.

Before starting let's install the required libraries:
```sh
pip install langchain tiktoken "pypdf<=3.17.0"
```
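The example below calls the OpenAI API through LangChain, so an OpenAI API key needs to be available in your environment. A minimal, hypothetical setup (the key value is a placeholder):

```python
import os

# Placeholder for illustration; in practice, export OPENAI_API_KEY in your shell instead
os.environ["OPENAI_API_KEY"] = "sk-..."
```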


```python
from langchain import OpenAI, FAISS, PromptTemplate
from langchain.embeddings import OpenAIEmbeddings
from langchain.document_loaders import PyPDFLoader
from langchain.chains import RetrievalQA
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Prepare vector store (FAISS) with the IPCC report
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100, add_start_index=True)
loader = PyPDFLoader("https://www.ipcc.ch/report/ar6/syr/downloads/report/IPCC_AR6_SYR_LongerReport.pdf")
db = FAISS.from_documents(loader.load_and_split(text_splitter), OpenAIEmbeddings())

# Prepare QA chain
PROMPT_TEMPLATE = """You are the Climate Assistant, a helpful AI assistant made by Giskard.
Your task is to answer common questions on climate change.
You will be given a question and relevant excerpts from the IPCC Climate Change Synthesis Report (2023).
Please provide short and clear answers based on the provided context. Be polite and helpful.

Context:
{context}

Question:
{question}

Your answer:
"""

llm = OpenAI(model="gpt-3.5-turbo-instruct", temperature=0)
prompt = PromptTemplate(template=PROMPT_TEMPLATE, input_variables=["question", "context"])
climate_qa_chain = RetrievalQA.from_llm(llm=llm, retriever=db.as_retriever(), prompt=prompt)
```
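Before scanning, you may want a quick smoke test that the chain answers end to end; a short optional check using the chain built above (the question is just an example):

```python
# Optional smoke test of the QA chain
print(climate_qa_chain.run("Is sea level rise avoidable? When will it stop?"))
```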

<h2 id="scan-your-model-for-issues">2. 🔎 Scan your model for issues</h2>

Next, wrap your agent to prepare it for Giskard's scan:

```python
import giskard
import pandas as pd

def model_predict(df: pd.DataFrame):
    """Wraps the LLM call in a simple Python function.

    The function takes a pandas.DataFrame containing the input variables needed
    by your model, and must return a list of the outputs (one for each row).
    """
    return [climate_qa_chain.run({"query": question}) for question in df["question"]]

# Don’t forget to fill the `name` and `description`: they are used by Giskard
# to generate domain-specific tests.
giskard_model = giskard.Model(
    model=model_predict,
    model_type="text_generation",
    name="Climate Change Question Answering",
    description="This model answers any question about climate change based on IPCC reports",
    feature_names=["question"],
)
```
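To check the wrapper before scanning, you can also put a few example questions in a `giskard.Dataset` and run a prediction through the wrapped model; a small sketch (the questions are illustrative):

```python
# Optional: validate the wrapped model on a couple of illustrative questions
examples = pd.DataFrame({
    "question": [
        "What is the main cause of global warming?",
        "Can we still limit warming to 1.5°C?",
    ]
})
giskard_dataset = giskard.Dataset(df=examples, target=None)
print(giskard_model.predict(giskard_dataset).prediction)
```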

✨✨✨Then run Giskard's magical scan✨✨✨
```python
scan_results = giskard.scan(giskard_model)
```
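If you created an example dataset as in the sketch above, you can optionally pass it to the scan so the detectors also use your own inputs:

```python
# Optional: run the scan with the example dataset from the previous sketch
scan_results = giskard.scan(giskard_model, giskard_dataset)
```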
Once the scan completes, you can display the results directly in your notebook:

```python
display(scan_results)

# Or save it to a file
scan_results.to_html("scan_results.html")
```

*If you're facing issues, check out our [docs](https://docs.giskard.ai/en/stable/open_source/scan/scan_llm/index.html) for more information.*

<h2 id="automatically-generate-an-evaluation-dataset-for-your-rag-applications">3. 🪄 Automatically generate an evaluation dataset for your RAG applications</h2>

If the scan found issues in your model, you can automatically generate a test suite based on the vulnerabilities found:

```python
test_suite = scan_results.generate_test_suite("My first test suite")
```
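The generated suite can then be executed directly; a minimal sketch, assuming the suite keeps a reference to the scanned model (as suites generated from scan results normally do):

```python
# Run the generated test suite and display the results in the notebook
suite_results = test_suite.run()
display(suite_results)
```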

By default, RAGET automatically generates 6 different question types (these can be selected if needed; see the advanced question generation section of the docs). The total number of questions is split equally across the question types. To make the question generation more relevant and accurate, you can also provide a description of your agent.

```python

from giskard.rag import generate_testset, KnowledgeBase

# Load your data and initialize the KnowledgeBase
df = pd.read_csv("path/to/your/knowledge_base.csv")

knowledge_base = KnowledgeBase.from_pandas(df, columns=["column_1", "column_2"])

# Generate a test set with 10 questions & answers for each question type (this will take a while)
testset = generate_testset(
    knowledge_base, 
    num_questions=60,
    language='en',  # optional, auto-detected if not provided
    agent_description="A customer support chatbot for company X",  # helps generate better questions
)
```

Depending on how many questions you generate, this can take a while. Once you’re done, you can save this generated test set for future use:

```python
# Save the generated testset
testset.save("my_testset.jsonl")
```
You can easily load it back:

```python
from giskard.rag import QATestset

loaded_testset = QATestset.load("my_testset.jsonl")

# Convert it to a pandas dataframe
df = loaded_testset.to_pandas()
```

Here’s an example of a generated question:

| question                               | reference_context                                                                                                                                                     | reference_answer                                             | metadata                                               |
|----------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------|-------------------------------------------------------|
| For which countries can I track my shipping? | Document 1: We offer free shipping on all orders over $50. For orders below $50, we charge a flat rate of $5.99. We offer shipping services to customers residing in all 50 states of the US, in addition to providing delivery options to Canada and Mexico. Document 2: Once your purchase has been successfully confirmed and shipped, you will receive a confirmation email containing your tracking number. You can simply click on the link provided in the email or visit our website’s order tracking page. | We ship to all 50 states in the US, as well as to Canada and Mexico. We offer tracking for all our shippings. | `{"question_type": "simple", "seed_document_id": 1, "topic": "Shipping policy"}` |

Each row of the test set contains 5 columns:

- `question`: the generated question
- `reference_context`: the context that can be used to answer the question
- `reference_answer`: the answer to the question (generated with GPT-4)
- `conversation_history`: not shown in the table above; contains the history of the conversation with the agent as a list. It is only relevant for conversational questions and is an empty list otherwise.
- `metadata`: a dictionary with various metadata about the question, including the `question_type`, the `seed_document_id` (the id of the document used to generate the question) and the `topic` of the question.
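With the test set in hand, RAGET can score your agent and its components, as described above. A hedged sketch of the evaluation step, reusing the climate QA chain from the quickstart as an illustrative agent (treat the wrapper and report methods as assumptions to adapt to your own setup):

```python
from giskard.rag import evaluate

def answer_fn(question, history=None):
    """Illustrative wrapper calling the RAG agent built earlier with a single question."""
    return climate_qa_chain.run({"query": question})

# Evaluate the agent's answers against the generated test set and knowledge base
report = evaluate(answer_fn, testset=loaded_testset, knowledge_base=knowledge_base)
report.to_html("rag_eval_report.html")
```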

<h1 id="community">👋 Community</h1>
We welcome contributions from the AI community! Read this [guide](CONTRIBUTING.md) to get started.

Join our thriving community on our Discord server: [join Discord server](https://gisk.ar/discord)

🌟 [Leave us a star](https://github.com/Giskard-AI/giskard), it helps the project to get discovered by others and keeps us motivated to build awesome open-source tools! 🌟

❤️ You can also [sponsor us](https://github.com/sponsors/Giskard-AI) on GitHub. With a monthly sponsor subscription, you can get a sponsor badge and get your bug reports prioritized. We also offer one-time sponsoring if you want us to get involved in a consulting project, run a workshop, or give a talk at your company.

            
