## SMART: Sentences as Basic Units for Summary Evaluation
This directory contains tools for using SMART to evaluate system-generated texts, given the source document and the reference summaries.
Link to paper: https://arxiv.org/pdf/2208.01030.pdf
### Run SMART Evaluation
SMART can be run programmatically. For example:
```
from smart_eval import matching_functions, scorer  # assumed import path

matcher = matching_functions.chrf_matcher
smart_scorer = scorer.SmartScorer(matching_fn=matcher)
# `reference` and `candidate` hold the reference and system summaries.
score = smart_scorer.smart_score(reference, candidate)
```
Here, `score` is a dictionary containing the SMART-1, SMART-2, and SMART-L scores.
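To inspect the result, a minimal sketch (the exact key names in the dictionary are defined by `scorer.SmartScorer`, so none are assumed here):

```
# Print each SMART variant the scorer returned.
for name, value in score.items():
    print(name, value)
```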
### Replicate the SummEval results from the paper
You first need to download the necessary datasets:
1. [BARTScore data](https://github.com/neulab/BARTScore/tree/main/SUM/SummEval) (you need to unpickle it and re-save it as a JSON file; see the sketch after this list)
2. [SummEval data](https://drive.google.com/file/d/1d2Iaz3jNraURP1i7CfTqPIj8REZMJ3tS/view)
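A minimal sketch of the unpickle-and-resave step for the BARTScore data, assuming the pickled object is JSON-serializable; the file names below are placeholders:

```
import json
import pickle

# Placeholder paths; point them at the downloaded BARTScore pickle.
with open('bartscore_summeval.pkl', 'rb') as f:
    data = pickle.load(f)

with open('bartscore_summeval.json', 'w') as f:
    json.dump(data, f)
```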
You also need to download the precomputed scores for the model-based matching functions (e.g., BLEURT, BERTScore, and T5-ANLI). Follow the instructions to install [gsutil](https://cloud.google.com/storage/docs/gsutil_install), then run the following, which copies the precomputed scores into a local `SMART/` directory:
```
gsutil cp -r gs://gresearch/SMART ./
```
Finally, run the following, where `${BARTSCORE_PATH}` and `${SUMMEVAL_PATH}` point to the files prepared above and `${OUTPUT_PATH}` is where the results will be written:

```
python summeval_experiments.py --bartscore_file=${BARTSCORE_PATH} --summeval_file=${SUMMEVAL_PATH} --output_file=${OUTPUT_PATH}
```