# Rouge
*A full Python library for the ROUGE metric [(paper)](http://www.aclweb.org/anthology/W04-1013).*
### Disclaimer
This implementation is independent of the "official" ROUGE script (a.k.a. `ROUGE-1.5.5`).
Results may be *slightly* different; see [discussions in #2](https://github.com/pltrdy/rouge/issues/2).
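For intuition, ROUGE-1 measures unigram overlap between a hypothesis and a reference. A minimal hand computation (illustrative only — this library also implements ROUGE-2 and ROUGE-L, and its tokenization and smoothing may differ) could look like:

```python
from collections import Counter

def rouge_1(hypothesis, reference):
    """Unigram-overlap precision, recall and F1 (illustrative sketch)."""
    hyp_counts = Counter(hypothesis.split())
    ref_counts = Counter(reference.split())
    # Clipped overlap: each unigram counts at most as often as it appears in the reference
    overlap = sum((hyp_counts & ref_counts).values())
    precision = overlap / max(sum(hyp_counts.values()), 1)
    recall = overlap / max(sum(ref_counts.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return {"f": f1, "p": precision, "r": recall}

scores = rouge_1("the cat sat", "the cat sat on the mat")
# every hypothesis unigram appears in the reference -> p = 1.0; r = 3/6 = 0.5
```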
## Quickstart
#### Clone & Install
```shell
git clone https://github.com/pltrdy/rouge
cd rouge
python setup.py install
# or
pip install -U .
```
or from PyPI:
```shell
pip install rouge
```
#### Use it from the shell (JSON Output)
```
$ rouge -h
usage: rouge [-h] [-f] [-a] hypothesis reference

Rouge Metric Calculator

positional arguments:
  hypothesis  Text or file path
  reference   Text or file path

optional arguments:
  -h, --help  show this help message and exit
  -f, --file  File mode
  -a, --avg   Average mode
```
e.g.
```shell
# Single Sentence
rouge "transcript is a written version of each day 's cnn student" \
"this page includes the show transcript use the transcript to help students with"
# Scoring using two files (line by line)
rouge -f ./tests/hyp.txt ./ref.txt
# Avg scoring - 2 files
rouge -f ./tests/hyp.txt ./ref.txt --avg
```
#### As a library
###### Score 1 sentence
```python
from rouge import Rouge
hypothesis = "the #### transcript is a written version of each day 's cnn student news program use this transcript to he lp students with reading comprehension and vocabulary use the weekly newsquiz to test your knowledge of storie s you saw on cnn student news"
reference = "this page includes the show transcript use the transcript to help students with reading comprehension and vocabulary at the bottom of the page , comment for a chance to be mentioned on cnn student news . you must be a teac her or a student age # # or older to request a mention on the cnn student news roll call . the weekly newsquiz tests students ' knowledge of even ts in the news"
rouge = Rouge()
scores = rouge.get_scores(hypothesis, reference)
```
*Output:*
```json
[
{
"rouge-1": {
"f": 0.4786324739396596,
"p": 0.6363636363636364,
"r": 0.3835616438356164
},
"rouge-2": {
"f": 0.2608695605353498,
"p": 0.3488372093023256,
"r": 0.20833333333333334
},
"rouge-l": {
"f": 0.44705881864636676,
"p": 0.5277777777777778,
"r": 0.3877551020408163
}
}
]
```
*Note: `"f"` stands for F1 score, `"p"` for precision, and `"r"` for recall.*
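As a sanity check, `f` is the harmonic mean of `p` and `r`. Plugging the ROUGE-1 values from the output above into that formula reproduces the reported `f` up to trailing digits (the tiny discrepancy is presumably due to a small smoothing term in the library's denominator):

```python
p = 0.6363636363636364  # rouge-1 precision from the output above
r = 0.3835616438356164  # rouge-1 recall from the output above
f = 2 * p * r / (p + r)  # harmonic mean of precision and recall
# agrees with the reported 0.4786324739396596 to ~8 decimal places
```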
###### Score multiple sentences
```python
import json
from rouge import Rouge
# Load some sentences
with open('./tests/data.json') as f:
data = json.load(f)
hyps, refs = map(list, zip(*[[d['hyp'], d['ref']] for d in data]))
rouge = Rouge()
scores = rouge.get_scores(hyps, refs)
# or
scores = rouge.get_scores(hyps, refs, avg=True)
```
*Output (`avg=False`)*: a list of `n` dicts:
```
{"rouge-1": {"f": _, "p": _, "r": _}, "rouge-2" : { .. }, "rouge-l": { ... }}
```
*Output (`avg=True`)*: a single dict with average values:
```
{"rouge-1": {"f": _, "p": _, "r": _}, "rouge-2" : { .. }, "rouge-l": { ... }}
```
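The reduction performed by `avg=True` can be understood as a per-metric, per-statistic arithmetic mean over the `n` pairs. A sketch of that reduction (a hypothetical helper for illustration, not part of the library's API):

```python
def average_scores(score_list):
    """Arithmetic mean of each statistic across a list of per-pair score dicts."""
    metrics = score_list[0].keys()  # e.g. rouge-1, rouge-2, rouge-l
    return {
        m: {
            stat: sum(s[m][stat] for s in score_list) / len(score_list)
            for stat in ("f", "p", "r")
        }
        for m in metrics
    }

per_pair = [
    {"rouge-1": {"f": 0.4, "p": 0.5, "r": 0.3}},
    {"rouge-1": {"f": 0.6, "p": 0.7, "r": 0.5}},
]
avg = average_scores(per_pair)  # one dict of means, e.g. rouge-1 f = 0.5
```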
###### Score two files (line by line)
Given two files `hyp_path` and `ref_path` with the same number of lines (`n`), compute a score for each pair of lines, or the average over the whole file.
```python
from rouge import FilesRouge
files_rouge = FilesRouge()
scores = files_rouge.get_scores(hyp_path, ref_path)
# or
scores = files_rouge.get_scores(hyp_path, ref_path, avg=True)
```