weak-nlp

Name: weak-nlp
Version: 0.0.13
Home page: https://github.com/code-kern-ai/weak-nlp
Summary: Intelligent information integration based on weak supervision
Author: Johannes Hötter
Keywords: kern.ai, machine learning, supervised learning, python
Upload time: 2023-10-10 09:28:29
Requirements: none recorded
![](weak-nlp.png)

# 🔮 weak-nlp
Intelligent information integration based on weak supervision
[![Python 3.9](https://img.shields.io/badge/python-3.9-blue.svg)](https://www.python.org/downloads/release/python-390/)
[![pypi 0.0.13](https://img.shields.io/badge/pypi-0.0.13-yellow.svg)](https://pypi.org/project/weak-nlp/0.0.13/)

## Installation
You can set up this library either by running `$ pip install weak-nlp`, or by cloning this repository and running `$ pip install -r requirements.txt` inside it.

A sample installation would be:
```
$ conda create --name weak-nlp python=3.9
$ conda activate weak-nlp
$ pip install weak-nlp
```

## Usage
The library consists of three main entities:
- **Associations**: an association captures one record <> label mapping. This does not have to be the ground-truth label for a given record; it can also come from, e.g., a labeling function (see below for an example).
- **Source vectors**: a source vector combines the created associations from one logical source. Additionally, it marks whether the respective source vector can be seen as a reference vector, such as a manually labeled source vector containing the *true* record <> label mappings.
- **Noisy label matrices**: a collection of source vectors that can be analyzed with respect to quality metrics (such as the confusion matrix, i.e., true positives etc.), quantity metrics (intersections and conflicts), or weakly supervisable labels.
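
To make the quantity metrics concrete, here is a minimal, library-independent sketch of how intersections (overlaps) and conflicts between two label sources can be counted. The dictionaries and the function name are illustrative only, not part of weak-nlp's API:

```python
def count_overlaps_and_conflicts(source_a, source_b):
    """Compare two record -> label mappings.

    An overlap is a record both sources label identically;
    a conflict is a record they label differently.
    """
    overlaps = conflicts = 0
    for record_id in source_a.keys() & source_b.keys():
        if source_a[record_id] == source_b[record_id]:
            overlaps += 1
        else:
            conflicts += 1
    return overlaps, conflicts

lf_a = {1: "clickbait", 2: "regular", 4: "regular"}
lf_b = {1: "clickbait", 2: "clickbait", 3: "regular"}
print(count_overlaps_and_conflicts(lf_a, lf_b))  # (1, 1): record 1 overlaps, record 2 conflicts
```

Records only one source has labeled (3 and 4 above) count toward neither metric; only jointly labeled records can overlap or conflict.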

The following is an example of building a noisy label matrix for a classification task:
```python
import weak_nlp

def contains_keywords(text):
    if any(term in text for term in ["val1", "val2", "val3"]):
        return "regular"
    # implicitly returns None when no keyword matches

texts = [...]

lf_associations = []
for text_id, text in enumerate(texts):
    label = contains_keywords(text)
    if label is not None:
        association = weak_nlp.ClassificationAssociation(text_id + 1, label)
        lf_associations.append(association)

lf_vector = weak_nlp.SourceVector(contains_keywords.__name__, False, lf_associations)

ground_truths = [
    weak_nlp.ClassificationAssociation(1, "clickbait"),
    weak_nlp.ClassificationAssociation(2, "regular"),
    weak_nlp.ClassificationAssociation(3, "regular")
]

gt_vector = weak_nlp.SourceVector("ground_truths", True, ground_truths)

cnlm = weak_nlp.CNLM([gt_vector, lf_vector])
```
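
Once both vectors exist, the confusion-matrix quality metrics mentioned above boil down to comparing a label function's votes with the reference vector. A hand-rolled sketch of that comparison (not weak-nlp's actual API), using plain record -> label dictionaries:

```python
def confusion_counts(predictions, ground_truth, label):
    """Count true positives, false positives, and false negatives for one label."""
    tp = sum(1 for r, l in predictions.items()
             if l == label and ground_truth.get(r) == label)
    fp = sum(1 for r, l in predictions.items()
             if l == label and ground_truth.get(r) != label)
    fn = sum(1 for r, l in ground_truth.items()
             if l == label and predictions.get(r) != label)
    return tp, fp, fn

gt = {1: "clickbait", 2: "regular", 3: "regular"}
lf = {2: "regular", 3: "regular"}           # the label function fired on records 2 and 3 only
print(confusion_counts(lf, gt, "regular"))  # (2, 0, 0)
```

Note that a label function abstaining on a record (here, record 1) is not a false positive; it simply contributes no vote for that record.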

For extraction tasks, your code snippet could look as follows:
```python
import weak_nlp

def match_keywords(text):
    for idx, token in enumerate(text.split()):
        if token in ["val1", "val2", "val3"]:
            yield "person", idx, idx+1 # label, from_idx, to_idx

texts = [...]

lf_associations = []
for text_id, text in enumerate(texts):
    for triplet in match_keywords(text):
        label, from_idx, to_idx = triplet
        association = weak_nlp.ExtractionAssociation(text_id + 1, label, from_idx, to_idx)
        lf_associations.append(association)

lf_vector = weak_nlp.SourceVector(match_keywords.__name__, False, lf_associations)

ground_truths = [
    weak_nlp.ExtractionAssociation(1, "person", 1, 2),
    weak_nlp.ExtractionAssociation(2, "person", 4, 5),
]

gt_vector = weak_nlp.SourceVector("ground_truths", True, ground_truths)

enlm = weak_nlp.ENLM([gt_vector, lf_vector])
```
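
For extraction, comparing sources additionally requires deciding when two `(from_idx, to_idx)` spans refer to the same mention. A small, hypothetical helper for token-span overlap, assuming inclusive bounds for illustration (weak-nlp's internals may resolve spans differently):

```python
def spans_overlap(a_from, a_to, b_from, b_to):
    """True if two token spans share at least one token index (inclusive bounds)."""
    return a_from <= b_to and b_from <= a_to

# The ground-truth span (1, 2) vs. two candidate spans:
print(spans_overlap(1, 2, 2, 3))  # True: token index 2 is shared
print(spans_overlap(1, 2, 4, 5))  # False: the spans are disjoint
```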

## Roadmap
If you want to have something added, feel free to open an [issue](https://github.com/code-kern-ai/weak-nlp/issues).

## Contributing
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement".
Don't forget to give the project a star! Thanks again!

1. Fork the Project
2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
3. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the Branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request

And please don't forget to leave a ⭐ if you like the work! 

## License
Distributed under the Apache 2.0 License. See LICENSE.txt for more information.

## Contact
This library is developed and maintained by [kern.ai](https://github.com/code-kern-ai). If you want to provide us with feedback or have some questions, don't hesitate to contact us. We're super happy to help ✌️



            
