oarelatedworkevaluator


Name: oarelatedworkevaluator
Version: 1.0.0
Home page: https://github.com/KNOT-FIT-BUT/OARelatedWorkEvaluator
Summary: Package for evaluation of the OARelatedWork dataset.
Upload time: 2024-07-15 10:29:04
Author: Martin Dočekal
Requires Python: >=3.10
License: The Unlicense
Keywords: dataset, OARelatedWork evaluation, OARelatedWork dataset
Requirements: no requirements were recorded.
# Evaluation

This folder contains evaluation scripts for experiments on the OARelatedWork dataset.

## Install

    pip install oarelatedworkevaluator

## Usage

It can be run as:

    oarelatedworkevaluator toy_example/example.csv res.json

See help for more information.

## Format of related works

Each related work section is represented in the following format.

Each headline stands on its own line and is prefixed with a number of `#` characters corresponding to its level. For example, `## 2.2. Dependency treebank for other languages` is the headline of a first-level subsection of the related work section, so it is prefixed with `##`.

The headline of the related work section itself (e.g., `2. Related work`) is omitted.

Each paragraph is on its own line. Sentences are separated by a single space.

Formulas are masked with `<eq>`.

Citations have the following format:

    <cite>{'UNK' if bib_entry.id is None else bib_entry.id}<sep>{bib_entry.title}<sep>{first_author}</cite>

When a citation has no bib_entry, it is just `<cite>UNK</cite>`. Similarly, when the first author is not known: `<cite>{'UNK' if bib_entry.id is None else bib_entry.id}<sep>{bib_entry.title}<sep>UNK</cite>`.
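The citation template above can be sketched as a small Python helper. The function name and argument names are illustrative, not part of the oarelatedworkevaluator package:

```python
def format_citation(bib_id=None, title=None, first_author=None):
    """Build a <cite> tag following the format described above.

    A missing title stands in for a missing bib_entry; an unknown id
    or first author falls back to UNK, as in the template.
    """
    if title is None:  # no bib_entry at all
        return "<cite>UNK</cite>"
    cite_id = "UNK" if bib_id is None else str(bib_id)
    author = "UNK" if first_author is None else first_author
    return f"<cite>{cite_id}<sep>{title}<sep>{author}</cite>"
```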

References use a similar format to citations:

    <ref>type_of_ref_target</ref>

Thus, a reference to a figure is `<ref>figure</ref>`. When the reference target has an unknown type, it is just `<ref>UNK</ref>`.

### Example

```
First paragraph of related work section.
## 2.1. headline of subsection
First sentence of first paragraph of subsection. Second sentence of first paragraph of subsection.
## 2.2. Graph Attention Networks
Recently, attention networks have achieved state-of-the-art results in many tasks <cite>4931429<sep>Show, attend and tell: Neural image caption generation with visual attention<sep>Kelvin Xu</cite>. By using learnable weights on each input, the attention mechanism determines how much attention to give to each input. GATs <cite>555880<sep>Graph attention networks<sep>Petar Veličković</cite> utilize an attention-based aggregator to generate attention coefficients over all neighbors of a node for feature aggregation. In particular, the aggregator function of GATs is
<eq>
Later, <cite>94675806<sep>How attentive are graph attention networks?<sep>Shaked Brody</cite> pointed out that ...
We also provide a comprehensive performance comparison in <ref>table</ref>.
```
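Since the format is line-based, it can be split into headlines and paragraphs with a short sketch like the following. This is a hypothetical helper for illustration, not part of the package; note that per the rules above, a `##` headline marks a first-level subsection of the related work section:

```python
def parse_related_work(text):
    """Split a related work section into headline and paragraph items.

    Each line is either a headline (the run of leading '#' characters
    gives its depth) or a single paragraph, mirroring the format above.
    """
    items = []
    for line in text.splitlines():
        if line.startswith("#"):
            depth = len(line) - len(line.lstrip("#"))
            items.append(("headline", depth, line[depth:].strip()))
        else:
            items.append(("paragraph", line))
    return items
```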

## Format of results for evaluation
By default, the results are expected to be stored as a CSV file with two fields, **sample_id** and **summary** (you can also use the **sequence** alias instead of **summary**). However, a different format can be used via the `--file_format` argument.
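A results file in the default format can be produced with the standard library, as in the sketch below. The file name and the sample rows are made up for illustration; only the **sample_id**/**summary** field names come from the description above:

```python
import csv

# Minimal results file in the default expected format: two columns,
# sample_id and summary (field names as documented above).
rows = [
    {"sample_id": 0, "summary": "First paragraph of related work section."},
    {"sample_id": 1, "summary": "Another generated related work section."},
]

with open("res_input.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["sample_id", "summary"])
    writer.writeheader()
    writer.writerows(rows)
```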

There is also a toy example results file, `example.csv`, in the `toy_example` folder with an oracle summary.

            
