pycocoevalcap


Name: pycocoevalcap
Version: 1.2
Home page: https://github.com/salaniz/pycocoevalcap
Summary: MS-COCO Caption Evaluation for Python 3
Upload time: 2020-11-18 12:03:42
Maintainer: salaniz
Requires Python: >=3
Microsoft COCO Caption Evaluation
=================================

Evaluation code for MS COCO caption generation.

## Description ##
This repository provides Python 3 support for the caption evaluation metrics used for the MS COCO dataset.

The code is derived from the original repository, which supports Python 2.7: https://github.com/tylin/coco-caption.
Caption evaluation depends on the COCO API, which natively supports Python 3.

## Requirements ##
- Java 1.8.0
- Python 3

## Installation ##
To install pycocoevalcap and the pycocotools dependency (https://github.com/cocodataset/cocoapi), run:
```
pip install pycocoevalcap
```

## Usage ##
See the example script: [example/coco_eval_example.py](example/coco_eval_example.py)
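
For orientation, here is a minimal sketch modeled on that example script. It assumes a COCO-format annotation file and a results file of generated captions; both file paths below are placeholders.

```
# Minimal evaluation sketch (file paths are placeholders).
from pycocotools.coco import COCO
from pycocoevalcap.eval import COCOEvalCap

annotation_file = "captions_val2014.json"        # ground-truth COCO annotations
results_file = "captions_val2014_results.json"   # generated captions in COCO results format

coco = COCO(annotation_file)               # load ground-truth captions
coco_result = coco.loadRes(results_file)   # load generated captions

coco_eval = COCOEvalCap(coco, coco_result)
# Restrict evaluation to the images that actually have generated captions.
coco_eval.params["image_id"] = coco_result.getImgIds()
coco_eval.evaluate()

# coco_eval.eval maps metric names (Bleu_1..Bleu_4, METEOR, ROUGE_L, CIDEr, SPICE) to scores.
for metric, score in coco_eval.eval.items():
    print(f"{metric}: {score:.3f}")
```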

## Files ##
./
- eval.py: contains the COCOEvalCap class, which can be used to evaluate results on COCO.
- tokenizer: Python wrapper of the Stanford CoreNLP PTBTokenizer
- bleu: BLEU evaluation code (each scorer module can also be used standalone; see the sketch after this list)
- meteor: METEOR evaluation code
- rouge: ROUGE-L evaluation code
- cider: CIDEr evaluation code
- spice: SPICE evaluation code
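
Each scorer module above exposes the same interface inherited from the original coco-caption code: a `compute_score(gts, res)` method taking two dicts that map an image id to a list of caption strings. A standalone sketch using the BLEU scorer (the captions below are made up, and real inputs should first pass through the PTBTokenizer wrapper):

```
# Standalone scorer sketch: gts/res map image ids to lists of caption strings.
# The example captions are illustrative; real inputs should be tokenized first,
# e.g. with the bundled PTBTokenizer.
from pycocoevalcap.bleu.bleu import Bleu

gts = {1: ["a dog runs across a grassy field", "a dog playing outside"]}
res = {1: ["a dog is running on the grass"]}

scorer = Bleu(4)  # computes BLEU-1 through BLEU-4
score, per_image = scorer.compute_score(gts, res)
print("BLEU-1..4:", score)  # a list of four corpus-level scores
```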

## Setup ##

- SPICE requires the download of [Stanford CoreNLP 3.6.0](http://stanfordnlp.github.io/CoreNLP/index.html) code and models. This will be done automatically the first time the SPICE evaluation is performed.
- Note: SPICE will try to create a cache of parsed sentences in ./spice/cache/, which dramatically speeds up repeated evaluations. The cache directory can be moved by setting 'CACHE_DIR' in ./spice; in the same file, caching can be turned off by removing the '-cache' argument from 'spice_cmd'.
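
The cache behavior can be seen by invoking SPICE directly; the hedged sketch below assumes the common scorer interface and that Java 1.8 is on the PATH (the first call triggers the CoreNLP download and populates the cache):

```
# Sketch: SPICE via the common compute_score(gts, res) scorer interface.
# The first run downloads Stanford CoreNLP and creates ./spice/cache/;
# repeated runs on the same sentences are served from the cache.
from pycocoevalcap.spice.spice import Spice

gts = {1: ["a man riding a horse on the beach"]}
res = {1: ["a person rides a horse near the ocean"]}

scorer = Spice()
average_score, per_image = scorer.compute_score(gts, res)
print("SPICE:", average_score)
```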

## References ##

- [Microsoft COCO Captions: Data Collection and Evaluation Server](http://arxiv.org/abs/1504.00325)
- PTBTokenizer: We use the [Stanford Tokenizer](http://nlp.stanford.edu/software/tokenizer.shtml) which is included in [Stanford CoreNLP 3.4.1](http://nlp.stanford.edu/software/corenlp.shtml).
- BLEU: [BLEU: a Method for Automatic Evaluation of Machine Translation](http://www.aclweb.org/anthology/P02-1040.pdf)
- Meteor: [Project page](http://www.cs.cmu.edu/~alavie/METEOR/) with related publications. We use the latest version (1.5) of the [code](https://github.com/mjdenkowski/meteor). Changes have been made to the source code to properly aggregate the statistics for the entire corpus.
- Rouge-L: [ROUGE: A Package for Automatic Evaluation of Summaries](http://anthology.aclweb.org/W/W04/W04-1013.pdf)
- CIDEr: [CIDEr: Consensus-based Image Description Evaluation](http://arxiv.org/pdf/1411.5726.pdf)
- SPICE: [SPICE: Semantic Propositional Image Caption Evaluation](https://arxiv.org/abs/1607.08822)

## Developers ##
- Xinlei Chen (CMU)
- Hao Fang (University of Washington)
- Tsung-Yi Lin (Cornell)
- Ramakrishna Vedantam (Virginia Tech)

## Acknowledgement ##
- David Chiang (University of Notre Dame)
- Michael Denkowski (CMU)
- Alexander Rush (Harvard University)



            
