<p align="center">
<div style="position: relative; width: 100%; text-align: center;">
<h1>inDoxJudge</h1>
<a href="https://github.com/osllmai/inDoxJudge">
<img src="https://readme-typing-svg.demolab.com?font=Georgia&size=16&duration=3000&pause=500&multiline=true&width=700&height=100&lines=InDoxJudge;LLM+Evaluation+%7C+RAG+Evaluation+%7C+Safety+Evaluation+%7C+LLM+Comparison;Copyright+©️+OSLLAM.ai" alt="Typing SVG" style="margin-top: 20px;"/>
</a>
</div>
<br/>
[![License](https://img.shields.io/github/license/osllmai/inDox)](https://github.com/osllmai/inDox/blob/main/LICENSE)
[![PyPI](https://badge.fury.io/py/indoxJudge.svg)](https://pypi.org/project/IndoxJudge/0.0.10/)
[![Python](https://img.shields.io/pypi/pyversions/indoxJudge.svg)](https://pypi.org/project/indoxJudge/0.0.10/)
[![Downloads](https://static.pepy.tech/badge/indoxJudge)](https://pepy.tech/project/indoxJudge)
[![Discord](https://img.shields.io/discord/1223867382460579961?label=Discord&logo=Discord&style=social)](https://discord.com/invite/ossllmai)
[![GitHub stars](https://img.shields.io/github/stars/osllmai/inDoxJudge?style=social)](https://github.com/osllmai/inDoxJudge)
<p align="center">
<a href="https://osllm.ai">Official Website</a> • <a href="https://docs.osllm.ai/index.html">Documentation</a> • <a href="https://discord.gg/2fftQauwDD">Discord</a>
</p>
<p align="center">
<b>NEW:</b> <a href="https://docs.google.com/forms/d/1CQXJvxLUqLBSXnjqQmRpOyZqD6nrKubLz2WTcIJ37fU/prefill">Subscribe to our mailing list</a> for updates and news!
</p>
Welcome to _IndoxJudge_! This repository provides a comprehensive suite of evaluation metrics for assessing the performance and quality of large language models (_LLMs_). Whether you're a researcher, developer, or enthusiast, this toolkit offers essential tools to measure various aspects of _LLMs_, including _knowledge retention_, _bias_, _toxicity_, and more.
<p align="center">
<img src="https://raw.githubusercontent.com/osllmai/indoxJudge/master/docs/assets/IndoxJudge%20Evaluate%20LLMs%20with%20metrics%20%26%20Model%20Safety.png" alt="IndoxJudge Evaluate LLMs with metrics & Model Safety">
</p>
## Overview
_IndoxJudge_ is designed to provide a standardized and extensible framework for evaluating _LLMs_. With a focus on _accuracy_, _fairness_, and _relevancy_, this toolkit supports a wide range of evaluation metrics and is continuously updated to include the latest advancements in the field.
## Features
- **Comprehensive Metrics**: Evaluate _LLMs_ across multiple dimensions, including _accuracy_, _bias_, _toxicity_, and _contextual relevancy_.
- **RAG Evaluation**: Includes specialized metrics for evaluating _retrieval-augmented generation (RAG)_ models.
- **Safety Evaluation**: Assess the _safety_ of model outputs, focusing on _toxicity_, _bias_, and ethical considerations.
- **Extensible Framework**: Easily integrate new metrics or customize existing ones to suit specific needs.
- **User-Friendly Interface**: Intuitive and easy-to-use interface for seamless evaluation.
- **Continuous Updates**: Regular updates to incorporate new metrics and improvements.
## Supported Models
_IndoxJudge_ currently supports the following _LLM_ providers; a minimal initialization sketch follows the list:
- **OpenAi**
- **GoogleAi**
- **IndoxApi**
- **HuggingFaceModel**
- **Mistral**
- **Phoenix** (coming soon; follow progress at [phoenix_cli](https://github.com/osllmai/phoenix_cli) or [phoenix](https://github.com/osllmai/phoenix))
- **Ollama**
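Each provider is exposed as a wrapper class in `indoxJudge.models` and serves as the judge model passed to an evaluator. The snippet below is a minimal sketch: the `OpenAi` arguments mirror the Usage example further down, while the other wrappers are assumed to follow a similar constructor pattern (check the documentation for their exact parameters).

```python
import os
from indoxJudge.models import OpenAi

# OpenAi(api_key=..., model=...) matches the Usage example below.
# Other wrappers (e.g. Mistral, HuggingFaceModel, Ollama) are assumed to
# accept similar constructor arguments -- see the docs for specifics.
judge_model = OpenAi(api_key=os.getenv("OPENAI_API_KEY"), model="gpt-4o")
```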
## Metrics
_IndoxJudge_ includes the following metrics, with more being added:
- **GEval**: General evaluation metric for _LLMs_.
- **KnowledgeRetention**: Assesses the ability of _LLMs_ to retain factual information.
- **BertScore**: Measures the similarity between generated and reference sentences.
- **Toxicity**: Evaluates the presence of toxic content in model outputs.
- **Bias**: Analyzes the potential biases in _LLM_ outputs.
- **Hallucination**: Identifies instances where the model generates false or misleading information.
- **Faithfulness**: Checks the alignment of generated content with source material.
- **ContextualRelevancy**: Assesses the relevance of responses in context.
- **Rouge**: Measures the overlap of n-grams between generated and reference texts.
- **BLEU**: Evaluates text generation quality based on n-gram precision against reference texts.
- **AnswerRelevancy**: Assesses the relevance of answers to questions.
- **METEOR**: Evaluates generated text against references using unigram matching with support for stems and synonyms.
- **Gruen**: Measures the quality of generated text by assessing grammaticality, redundancy, and focus.
- **Overallscore**: Provides an overall evaluation score for _LLMs_, computed as a weighted average of multiple metrics.
- **MCDA**: Multi-Criteria Decision Analysis for evaluating _LLMs_.
## Installation
To work with _IndoxJudge_ from source, clone the repository:
```bash
git clone https://github.com/osllmai/inDoxJudge.git
cd inDoxJudge
```
The package is also published on PyPI and can be installed directly with `pip install indoxJudge`.
## Setting Up the Python Environment
If you are running this project locally, create a virtual environment so that dependencies are managed in isolation. The steps below set up a virtual environment named `indox_judge`:
### Windows
1. **Create the virtual environment:**
```bash
python -m venv indox_judge
```
2. **Activate the virtual environment:**
```bash
indox_judge\Scripts\activate
```
### macOS/Linux
1. **Create the virtual environment:**
```bash
python3 -m venv indox_judge
```
2. **Activate the virtual environment:**
```bash
source indox_judge/bin/activate
```
### Install Dependencies
Once the virtual environment is activated, install the required dependencies by running:
```bash
pip install -r requirements.txt
```
## Usage
To use _IndoxJudge_, load your API key, select the model, and choose the evaluation metrics. Here's an example demonstrating how to evaluate a model's response for _faithfulness_:
```python
import os
from dotenv import load_dotenv
# Load environment variables
load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
# Import IndoxJudge and supported models
from indoxJudge.piplines import CustomEvaluator
from indoxJudge.models import OpenAi
from indoxJudge.metrics import Faithfulness
# Initialize the model with your API key
model = OpenAi(api_key=OPENAI_API_KEY, model="gpt-4o")
# Define your query and retrieval context
query = "What are the benefits of a Mediterranean diet?"
retrieval_context = [
"The Mediterranean diet emphasizes eating primarily plant-based foods, such as fruits and vegetables, whole grains, legumes, and nuts. It also includes moderate amounts of fish and poultry, and low consumption of red meat. Olive oil is the main source of fat, providing monounsaturated fats which are beneficial for heart health.",
"Research has shown that the Mediterranean diet can reduce the risk of heart disease, stroke, and type 2 diabetes. It is also associated with improved cognitive function and a lower risk of Alzheimer's disease. The diet's high content of fiber, antioxidants, and healthy fats contributes to its numerous health benefits.",
"A Mediterranean diet has been linked to a longer lifespan and a reduced risk of chronic diseases. It promotes healthy aging and weight management due to its emphasis on whole, unprocessed foods and balanced nutrition."
]
# Model response to evaluate (hard-coded for this example)
response = "The Mediterranean diet is known for its health benefits, including reducing the risk of heart disease, stroke, and diabetes. It encourages the consumption of fruits, vegetables, whole grains, nuts, and olive oil, while limiting red meat. Additionally, this diet has been associated with better cognitive function and a reduced risk of Alzheimer's disease, promoting longevity and overall well-being."
# Initialize the Faithfulness metric
faithfulness_metrics = Faithfulness(llm_response=response, retrieval_context=retrieval_context)
# Create an evaluator with the selected metrics
evaluator = CustomEvaluator(metrics=[faithfulness_metrics], model=model)
# Evaluate the response
faithfulness_result = evaluator.judge()
# Output the evaluation result
print(faithfulness_result)
```
## Example Output
```json
{
"faithfulness": {
"claims": [
"The Mediterranean diet is known for its health benefits.",
"The Mediterranean diet reduces the risk of heart disease.",
"The Mediterranean diet reduces the risk of stroke.",
"The Mediterranean diet reduces the risk of diabetes.",
"The Mediterranean diet encourages the consumption of fruits.",
"The Mediterranean diet encourages the consumption of vegetables.",
"The Mediterranean diet encourages the consumption of whole grains.",
"The Mediterranean diet encourages the consumption of nuts.",
"The Mediterranean diet encourages the consumption of olive oil.",
"The Mediterranean diet limits red meat consumption.",
"The Mediterranean diet is associated with better cognitive function.",
"The Mediterranean diet is associated with a reduced risk of Alzheimer's disease.",
"The Mediterranean diet promotes longevity.",
"The Mediterranean diet promotes overall well-being."
],
"truths": [
"The Mediterranean diet is known for its health benefits.",
"The Mediterranean diet reduces the risk of heart disease, stroke, and diabetes.",
"The Mediterranean diet encourages the consumption of fruits, vegetables, whole grains, nuts, and olive oil.",
"The Mediterranean diet limits red meat consumption.",
"The Mediterranean diet has been associated with better cognitive function.",
"The Mediterranean diet has been associated with a reduced risk of Alzheimer's disease.",
"The Mediterranean diet promotes longevity and overall well-being."
],
"reason": "The score is 1.0 because the 'actual output' aligns perfectly with the information presented in the 'retrieval context', showcasing the health benefits, disease risk reduction, cognitive function improvement, and overall well-being promotion of the Mediterranean diet."
}
}
```
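## Evaluating Multiple Metrics

`CustomEvaluator` accepts a list of metrics, so several of the metrics above can be scored in a single run. The sketch below reuses `model`, `response`, and `retrieval_context` from the Usage example; the `Faithfulness` arguments are taken from that example, while the `Toxicity` and `Bias` argument names are assumptions and should be checked against the metric documentation.

```python
from indoxJudge.piplines import CustomEvaluator
from indoxJudge.metrics import Faithfulness, Toxicity, Bias

# Faithfulness arguments follow the Usage example above; the Toxicity and
# Bias argument names below are assumptions -- verify them against the docs.
metrics = [
    Faithfulness(llm_response=response, retrieval_context=retrieval_context),
    Toxicity(messages=response),       # assumed signature
    Bias(llm_response=response),       # assumed signature
]

evaluator = CustomEvaluator(metrics=metrics, model=model)
results = evaluator.judge()
print(results)  # one entry per metric, similar to the faithfulness output above
```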
## Roadmap
We have an exciting roadmap planned for _IndoxJudge_:
| Plan |
| ------------------------------------------------------------------------- |
| Integration of additional metrics such as _Diversity_ and _Coherence_. |
| Introduction of a graphical user interface (_GUI_) for easier evaluation. |
| Expansion of the toolkit to support evaluation in multiple languages. |
| Release of a benchmarking suite for standardizing _LLM_ evaluations. |
## Contributing
We welcome contributions from the community! If you'd like to contribute, please fork the repository and create a pull request. For major changes, please open an issue first to discuss what you would like to change.
1. Fork the repository
2. Create a new branch (`git checkout -b feature-branch`)
3. Commit your changes (`git commit -am 'Add new feature'`)
4. Push to the branch (`git push origin feature-branch`)
5. Create a pull request