# finetune-eval

- Name: finetune-eval
- Version: 0.6.0.dev1
- Summary: Finetune_Eval_Harness
- Author: DFKI Berlin (akga01@dfki.de)
- Requires Python: >=3.7.0
- Keywords: deep learning
- Upload time: 2023-05-15 13:32:31
- Requirements: no requirements were recorded
# Finetune-Evaluation-Harness

![Build Status](https://github.com/malteos/finetune-evaluation-harness/actions/workflows/coverage_eval.yml/badge.svg)
![Build Status](https://github.com/malteos/finetune-evaluation-harness/actions/workflows/pull_request.yml/badge.svg)


## Overview
This project is a unified framework for evaluating various LLMs on a large number of different evaluation tasks. Some of the features of this framework:

- Different types of tasks supported: classification, NER tagging, question answering
- Support for parameter-efficient fine-tuning (PEFT)
- Running multiple tasks together in a single invocation


## Basic Usage

To evaluate a model (e.g. German BERT) on a task, use something like this:

```
python main.py --model_name_or_path bert-base-german-cased \
--task_list germeval2018 \
--results_logging_dir /sample/directory/results \
--output_dir /sample/directory
```

This framework is built on top of Hugging Face, so all the keyword arguments used in the regular HF Transformers `Trainer` work here as well: https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py.
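
For example, standard `TrainingArguments` options such as `--learning_rate`, `--weight_decay`, or `--warmup_ratio` can be passed straight through on the command line (the values below are illustrative, not recommendations):

```
python main.py --model_name_or_path bert-base-german-cased \
--task_list germeval2018 \
--output_dir /sample/directory \
--learning_rate 2e-5 \
--weight_decay 0.01 \
--warmup_ratio 0.1
```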


## Some Important Arguments

```
--model_name_or_path MODEL_NAME_OR_PATH
    Path to pretrained model or model identifier from huggingface.co/models (default: None)

--task_list TASK_LIST [TASK_LIST ...]
    List of tasks passed in order, e.g. germeval2018, germeval2017, gnad10, german_europarl (default: None)

--results_logging_dir RESULTS_LOGGING_DIR
    Where to save the results of the run as a JSON file (default: None)

--output_dir OUTPUT_DIR
    The output directory where the model predictions and checkpoints will be written. (default: None)

--num_train_epochs NUM_TRAIN_EPOCHS
    Total number of training epochs to perform. (default: 1.0)

--per_device_train_batch_size PER_DEVICE_TRAIN_BATCH_SIZE
    Batch size per GPU/TPU core/CPU for training. (default: 8)

--use_fast_tokenizer [USE_FAST_TOKENIZER]
    Whether or not to use one of the fast tokenizers (backed by the tokenizers library). (default: True)
```
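
Putting these together, a typical full invocation might look like this (paths and hyperparameter values are placeholders):

```
python main.py --model_name_or_path bert-base-german-cased \
--task_list germeval2018 \
--results_logging_dir /sample/directory/results \
--output_dir /sample/directory \
--num_train_epochs 3 \
--per_device_train_batch_size 16
```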

If you are unsure what any of the parameters does, `--help` is your friend.

## List of Supported Tasks

- GNAD10 (de) https://huggingface.co/datasets/gnad10
- GermEval 2017 (de) https://huggingface.co/datasets/malteos/germeval2017
- German Europarl (de) https://huggingface.co/datasets/akash418/german_europarl
- GermEval 2018 (de) https://huggingface.co/datasets/philschmid/germeval18
- German XQUAD (de) https://huggingface.co/datasets/deepset/germanquad
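
Since `--task_list` accepts multiple tasks, several of the supported tasks can be evaluated in one run; for example (the task selection here is illustrative):

```
python main.py --model_name_or_path bert-base-german-cased \
--task_list germeval2018 gnad10 german_europarl \
--results_logging_dir /sample/directory/results \
--output_dir /sample/directory
```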


## Implementing New Tasks

To implement a new task in eval harness, see [this guide](./docs/task_guide.md).


## Evaluating the Coverage of the Current Code
Go to the GitHub Actions section of this repository and start the build named "Evaluate", which checks whether the coverage of the existing code is above 80%. The build status is also visible on the main repository page.

## Guidelines On Running Tasks
- For some tasks, make sure to specify the exact dataset config that matches your needs
- If text sequence processing fails for a classification task, try setting `--use_fast_tokenizer` to False (see the example below)
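
For instance, falling back to the slow tokenizer looks like this (task and paths are placeholders):

```
python main.py --model_name_or_path bert-base-german-cased \
--task_list gnad10 \
--output_dir /sample/directory \
--use_fast_tokenizer False
```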

