zensols.amr-coref


Name: zensols.amr-coref
Version: 0.0.1
Home page: https://github.com/plandes/amr_coref
Summary: A python library / model for creating co-references between AMR graph nodes.
Upload time: 2023-11-12 01:31:30
Author: Paul Landes
Keywords: tooling
# amr_coref

**A python library / model for creating co-references between AMR graph nodes.**

## About
amr_coref is a python library and trained model designed to do co-reference resolution
between [Abstract Meaning Representation](https://amr.isi.edu/) graphs.

The project follows the general approach of the [neuralcoref project](https://github.com/huggingface/neuralcoref)
and its excellent
[blog post on co-referencing](https://medium.com/huggingface/how-to-train-a-neural-coreference-model-neuralcoref-2-7bb30c1abdfe).
However, the model is trained to do co-reference resolution directly between graph nodes and does not depend on
the sentences the graphs were created from.
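
As an illustration of what node-level co-reference means, consider two sentences and their AMR graphs below (a textbook-style example; the frame sense numbers are approximate). A co-reference cluster would group the `b` node of the first graph with the `h` node of the second, since both refer to the same person:

```
# ::snt The boy wants to go.
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-02
            :ARG0 b))

# ::snt He is tall.
(t / tall
   :domain (h / he))
```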

The trained model achieves the following scores:
```
MUC   :  R=0.647  P=0.779  F₁=0.706
B³    :  R=0.633  P=0.638  F₁=0.630
CEAF_m:  R=0.515  P=0.744  F₁=0.609
CEAF_e:  R=0.200  P=0.734  F₁=0.306
BLANC :  R=0.524  P=0.799  F₁=0.542
CoNLL-2012 average score: 0.548
```

## Project Status
**!! The following papers have GitHub projects/code with better scores and may be a preferable solution.**
See the uploaded file in [#1](https://github.com/bjascob/amr_coref/issues/1) for a quick view of the scores.
* [VGAE as Cheap Supervision for AMR Coreference Resolution](https://github.com/IreneZihuiLi/VG-AMRCoref)
* [End-to-end AMR Coreference Resolution](https://github.com/Sean-Blank/AMRcoref)

This is a fork of [Brad Jascob's](https://github.com/bjascob/amr_coref)
`amr_coref` repository, modified to address multiprocessing issues on
non-Debian style OSs.  See [#3](https://github.com/bjascob/amr_coref/issues/3)
for details.


## Installation and usage
There is currently no pip installation. To use the library, simply clone the code and use it in place.

The pre-trained model can be downloaded from the assets section in [releases](https://github.com/bjascob/amr_coref/releases).

To use the model, create a `data` directory and un-tar the model into it.

The script `40_Run_Inference.py` is an example of how to use the model.
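
A minimal sketch of the setup, assuming the model tarball has already been downloaded from the releases page (the tarball path below is a placeholder):

```
# clone the code and use it in place (no pip installation)
git clone https://github.com/plandes/amr_coref
cd amr_coref

# create the data directory and un-tar the pre-trained model into it
# (replace the tarball path with the asset downloaded from the releases page)
mkdir -p data
tar -xzf /path/to/downloaded/model.tar.gz -C data

# run the example inference script
python 40_Run_Inference.py
```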


## Training
If you'd like to train the model from scratch, you'll need a copy of the
[AMR corpus](https://catalog.ldc.upenn.edu/LDC2020T02).
To complete training, run the scripts in order:
- 10_Build_Model_TData.py
- 12_Build_Embeddings.py
- 14_Build_Mention_Tokens.py
- 30_Train_Model.py

You'll need `amr_annotation_3.0` and `GloVe/glove.6B.50d.txt` in your `data` directory.

The first few scripts will create the training data in `data/tdata` and the model training
script will create `data/model`. Training takes less than 4 hours.
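
A sketch of the expected data layout and the training run, assuming the corpus and GloVe vectors have already been placed under `data/` as described above:

```
# expected inputs (per the section above):
#   data/amr_annotation_3.0/        <- the LDC2020T02 AMR corpus
#   data/GloVe/glove.6B.50d.txt     <- 50-dimensional GloVe vectors

# build the training data, embeddings, and mention tokens, then train
python 10_Build_Model_TData.py
python 12_Build_Embeddings.py
python 14_Build_Mention_Tokens.py
python 30_Train_Model.py

# outputs: training data in data/tdata and the trained model in data/model
```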

            
