dottxt-eval-datasets

Name: dottxt-eval-datasets
Version: 0.1.0
Home page: None
Summary: Standard datasets for doteval LLM evaluations
Upload time: 2025-07-31 08:43:48
Maintainer: None
Docs URL: None
Author: .txt
Requires Python: >=3.10
License: None
Keywords: None
Requirements: No requirements were recorded.

# doteval-datasets

Standard datasets for [doteval](https://github.com/dottxt-ai/doteval) LLM evaluations.

## Installation

```bash
pip install dottxt-eval-datasets
```

## Usage

Once installed, the datasets are automatically available in doteval:

```python
from doteval import foreach
from PIL.Image import Image  # image type for the SROIE example (assuming PIL images)

@foreach.bfcl("simple")
def eval_bfcl(question: str, schema: list, answer: list):
    # Your evaluation logic here
    pass

@foreach.gsm8k("test")
def eval_gsm8k(question: str, reasoning: str, answer: str):
    # Your evaluation logic here
    pass

@foreach.sroie("test")
def eval_sroie(image: Image, entities: dict):
    # Your evaluation logic here
    pass
```
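
The stubs above only show how each dataset's columns map onto the decorated function's parameters; the evaluation body is up to you. As a minimal, hypothetical sketch (the `generate` helper stands in for whatever model client you use, and returning a plain boolean is an assumption rather than doteval's documented result type), a GSM8K evaluation might compare the last number in a completion against the reference answer:

```python
import re

from doteval import foreach


def generate(prompt: str) -> str:
    """Hypothetical model call; replace with your own client."""
    raise NotImplementedError


@foreach.gsm8k("test")
def eval_gsm8k(question: str, reasoning: str, answer: str):
    completion = generate(question)
    # GSM8K reference answers are numeric, so extract the last number
    # from the completion and compare it with the reference string.
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion.replace(",", ""))
    predicted = numbers[-1] if numbers else ""
    return predicted == answer.strip()
```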

## Available Datasets

- **BFCL** (Berkeley Function Calling Leaderboard): Tests function calling capabilities
  - Variants: `simple`, `multiple`, `parallel`
  - Columns: `question`, `schema`, `answer`

- **GSM8K**: Grade school math word problems
  - Splits: `train`, `test`
  - Columns: `question`, `reasoning`, `answer`

- **SROIE**: Scanned receipts OCR and information extraction
  - Splits: `train`, `test`
  - Columns: `image`, `entities`
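
Since SROIE rows pair an image with a flat dictionary of reference fields (company, date, address, total in the original SROIE task), per-field exact match is one natural score. The sketch below is hypothetical: `extract_entities` stands in for your OCR or vision pipeline, PIL is assumed as the image type, and the fractional return value is an assumption rather than doteval's documented result type.

```python
from PIL.Image import Image

from doteval import foreach


def extract_entities(image: Image) -> dict:
    """Hypothetical extraction call; replace with your own OCR/vision model."""
    raise NotImplementedError


@foreach.sroie("test")
def eval_sroie(image: Image, entities: dict):
    predicted = extract_entities(image)
    # Fraction of reference fields reproduced exactly.
    correct = sum(
        1 for key, value in entities.items() if predicted.get(key) == value
    )
    return correct / max(len(entities), 1)
```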

            

Raw data

{
    "_id": null,
    "home_page": null,
    "name": "dottxt-eval-datasets",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.10",
    "maintainer_email": null,
    "keywords": null,
    "author": ".txt",
    "author_email": null,
    "download_url": "https://files.pythonhosted.org/packages/56/c4/c82ebf13d3244728c18008c5788c0c60e71aad75cf956d9b9eb2d25b0fb5/dottxt_eval_datasets-0.1.0.tar.gz",
    "platform": null,
    "description": "# doteval-datasets\n\nStandard datasets for [doteval](https://github.com/dottxt-ai/doteval) LLM evaluations.\n\n## Installation\n\n```bash\npip install doteval-datasets\n```\n\n## Usage\n\nOnce installed, the datasets are automatically available in doteval:\n\n```python\nfrom doteval import foreach\n\n@foreach.bfcl(\"simple\")\ndef eval_bfcl(question: str, schema: list, answer: list):\n    # Your evaluation logic here\n    pass\n\n@foreach.gsm8k(\"test\")\ndef eval_gsm8k(question: str, reasoning: str, answer: str):\n    # Your evaluation logic here\n    pass\n\n@foreach.sroie(\"test\") \ndef eval_sroie(image: Image, entities: dict):\n    # Your evaluation logic here\n    pass\n```\n\n## Available Datasets\n\n- **BFCL** (Berkeley Function Calling Leaderboard): Tests function calling capabilities\n  - Variants: `simple`, `multiple`, `parallel`\n  - Columns: `question`, `schema`, `answer`\n\n- **GSM8K**: Grade school math word problems\n  - Splits: `train`, `test`\n  - Columns: `question`, `reasoning`, `answer`\n\n- **SROIE**: Scanned receipts OCR and information extraction\n  - Splits: `train`, `test`\n  - Columns: `image`, `entities`\n",
    "bugtrack_url": null,
    "license": null,
    "summary": "Standard datasets for doteval LLM evaluations",
    "version": "0.1.0",
    "project_urls": {
        "repository": "https://github.com/dottxt-ai/doteval-datasets"
    },
    "split_keywords": [],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "faabd142f7c067ac40b448f230fad32fa8da60daee9565f2007100f22cbf8d0a",
                "md5": "93f079efaed337f9f0efe6efef77856b",
                "sha256": "41cfa87d6cbd1ac27b5c94900c223ad191af799789df4126f4db97ab65040f36"
            },
            "downloads": -1,
            "filename": "dottxt_eval_datasets-0.1.0-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "93f079efaed337f9f0efe6efef77856b",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.10",
            "size": 6689,
            "upload_time": "2025-07-31T08:43:47",
            "upload_time_iso_8601": "2025-07-31T08:43:47.987952Z",
            "url": "https://files.pythonhosted.org/packages/fa/ab/d142f7c067ac40b448f230fad32fa8da60daee9565f2007100f22cbf8d0a/dottxt_eval_datasets-0.1.0-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "56c4c82ebf13d3244728c18008c5788c0c60e71aad75cf956d9b9eb2d25b0fb5",
                "md5": "9df033db6c4810ab27b575229e8fa8c4",
                "sha256": "fddbf07af0b14cd918bfd2dbed3e90d63f555e691a6084d475750fa904453aa8"
            },
            "downloads": -1,
            "filename": "dottxt_eval_datasets-0.1.0.tar.gz",
            "has_sig": false,
            "md5_digest": "9df033db6c4810ab27b575229e8fa8c4",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.10",
            "size": 11267,
            "upload_time": "2025-07-31T08:43:48",
            "upload_time_iso_8601": "2025-07-31T08:43:48.868731Z",
            "url": "https://files.pythonhosted.org/packages/56/c4/c82ebf13d3244728c18008c5788c0c60e71aad75cf956d9b9eb2d25b0fb5/dottxt_eval_datasets-0.1.0.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-07-31 08:43:48",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "dottxt-ai",
    "github_project": "doteval-datasets",
    "github_not_found": true,
    "lcname": "dottxt-eval-datasets"
}
        