tokenization-scorer


Name: tokenization-scorer
Version: 1.1.8
Summary: Package for evaluating text tokenizations.
Upload time: 2025-01-13 10:36:40
Requires Python: >=3.11
License: MIT License
Keywords: tokenization, evaluation, natural language processing
# tokenization-scorer   [![PyPI Version](https://img.shields.io/pypi/v/tokenization-scorer.svg)](https://pypi.python.org/pypi/tokenization-scorer) [![test tokenization-scorer](https://github.com/zouharvi/tokenization-scorer/actions/workflows/test.yml/badge.svg)](https://github.com/zouharvi/tokenization-scorer/actions/workflows/test.yml)

Simple package for evaluating text tokenizations.
The input is text (a list of files or stdin) and the output is a single number.
The higher the number, the better the tokenization.
The intended workflow is to try multiple tokenizations and select the one with the highest number (a minimal selection sketch follows the Python example below).

It can be used from the command line:

```bash
pip3 install tokenization-scorer

tokenization-scorer -i en-de.tokenized_with_unigramlm.{en,de}
> 0.4826

tokenization-scorer -i en-de.tokenized_with_wordpiece.{en,de}
> 0.5047
```
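
Input can also come from stdin. A minimal sketch, assuming stdin is read when `-i` is omitted and that the CLI exposes `--metric`/`--power` flags mirroring the Python API below; check `tokenization-scorer -h` for the actual options:

```bash
# assumed invocation: stdin input, flags mirroring score(metric=..., power=...)
cat en-de.tokenized_with_unigramlm.en | tokenization-scorer --metric renyi --power 2.5
```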

or within Python:

```python
import tokenization_scorer
text1 = "pick @@ed pick @@l @@ed pick @@les"
tokenization_scorer.score(text1, metric="renyi", power=2.5)
> 0.8031528501359657

text2 = "pick @@e @@d pick @@l @@e @@d pick @@l @@e @@s"
tokenization_scorer.score(text2, metric="renyi", power=2.5)
> 0.9105681923824472
```
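
The compare-and-pick workflow described above, as a minimal sketch built only on the `score` call shown here (the candidates are the two toy tokenizations from the example):

```python
import tokenization_scorer

# two candidate tokenizations of the same toy text
candidates = {
    "unigramlm-style": "pick @@ed pick @@l @@ed pick @@les",
    "wordpiece-style": "pick @@e @@d pick @@l @@e @@d pick @@l @@e @@s",
}

# score each candidate and keep the one with the highest value
scores = {name: tokenization_scorer.score(text, metric="renyi", power=2.5)
          for name, text in candidates.items()}
best = max(scores, key=scores.get)  # higher score = better tokenization
print(best, scores[best])
```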

Use `tokenization-scorer -h` to get an overview of supported metrics.
This package is a side-product of the paper [Tokenization and the Noiseless Channel](https://aclanthology.org/2023.acl-long.284/).

```bibtex
@inproceedings{tokenization_noiseless,
    title={Tokenization and the Noiseless Channel},
    author={Zouhar, Vilém and Meister, Clara and Gastaldi, Juan Luis and Sachan, Mrinmaya and Cotterell, Ryan},
    booktitle={Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics},
    year={2023},
    url={https://aclanthology.org/2023.acl-long.284/},
}
```
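
For intuition, the `renyi` metric used in the examples corresponds to the Rényi-entropy idea from the paper above. The following is an illustrative sketch over unigram token frequencies, not the package's implementation; the normalization here is an assumption, so the value will not match the outputs above exactly:

```python
import math
from collections import Counter

def renyi_efficiency(tokens: list[str], power: float = 2.5) -> float:
    """Renyi entropy of the unigram token distribution, normalized by
    the maximum entropy log|V| so the result lies in [0, 1]."""
    counts = Counter(tokens)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    if len(probs) < 2:
        return 0.0  # degenerate single-token vocabulary
    # H_a(p) = 1/(1-a) * log(sum_i p_i^a)
    entropy = math.log(sum(p ** power for p in probs)) / (1 - power)
    return entropy / math.log(len(probs))

print(renyi_efficiency("pick @@ed pick @@l @@ed pick @@les".split()))
```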

            

Raw data

```json
{
    "_id": null,
    "home_page": null,
    "name": "tokenization-scorer",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.11",
    "maintainer_email": null,
    "keywords": "tokenization, evaluation, natural language processing",
    "author": null,
    "author_email": "Vil\u00e9m Zouhar <vilem.zouhar@gmail.com>",
    "download_url": "https://files.pythonhosted.org/packages/ed/fe/3904122e2dc1fa5b2408c7b756644708d794732249a15985a7ca943a0a86/tokenization_scorer-1.1.8.tar.gz",
    "platform": null,
    "description": "# tokenization-scorer &nbsp;&nbsp;&nbsp; [![PyPI Version](https://img.shields.io/pypi/v/tokenization-scorer.svg)](https://pypi.python.org/pypi/tokenization-scorer) [![test tokenization-scorer](https://github.com/zouharvi/tokenization-scorer/actions/workflows/test.yml/badge.svg)](https://github.com/zouharvi/tokenization-scorer/actions/workflows/test.yml)\n\nSimple package for evaluating text tokenizations.\nThe input is a text (list of files or stdin) and output a single number.\nThe higher the number, the better the tokenization.\nThe intended workflow is to try multiple tokenizations and select the one with the highest number.\n\nIt can be used from the command line:\n\n```bash\npip3 install tokenization-scorer\n\ntokenization-scorer -i en-de.tokenized_with_unigramlm.{en,de}\n> 0.4826\n\ntokenization-scorer -i en-de.tokenized_with_wordpiece.{en,de}\n> 0.5047\n```\n\nor within Python:\n\n```python\nimport tokenization_scorer\ntext1 = \"pick @@ed pick @@l @@ed pick @@les\"\ntokenization_scorer.score(text1, metric=\"renyi\", power=2.5)\n> 0.8031528501359657\n\ntext2 = \"pick @@e @@d pick @@l @@e @@d pick @@l @@e @@s\"\ntokenization_scorer.score(text2, metric=\"renyi\", power=2.5)\n> 0.9105681923824472\n```\n\nUse `tokenization-scorer -h` to get an overview of supported metrics.\nThis package is a side-product of the paper [Tokenization and the Noiseless Channel](https://aclanthology.org/2023.acl-long.284/).\n\n```\n@inproceedings{tokenization_noiseless, \n    title={Tokenization and the Noiseless Channel},\n    author={Zouhar, Vil\u00e9m and Meister, Clara and Gastaldi, Juan Luis and Sachan, Mrinmaya and Cotterell, Ryan},\n    booktitle={Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics},\n    year={2023},\n    url={https://aclanthology.org/2023.acl-long.284/},\n}\n```\n",
    "bugtrack_url": null,
    "license": "MIT License",
    "summary": "Package for evaluating text tokenizations.",
    "version": "1.1.8",
    "project_urls": {
        "Issues": "https://github.com/zouharvi/tokenization-scorer/issues",
        "Repository": "https://github.com/zouharvi/tokenization-scorer"
    },
    "split_keywords": [
        "tokenization",
        " evaluation",
        " natural language processing"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "afe6d2b0c5e8ee4d40c97e0eaa5973342d310059528cb4d611a3d39f44d4464f",
                "md5": "f0bf86fbd54d22c4c4f655c1eb883501",
                "sha256": "662663ed029ae165ce5968d962622fd34539c533ecb521510b6b20db82e35ad4"
            },
            "downloads": -1,
            "filename": "tokenization_scorer-1.1.8-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "f0bf86fbd54d22c4c4f655c1eb883501",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.11",
            "size": 6382,
            "upload_time": "2025-01-13T10:36:35",
            "upload_time_iso_8601": "2025-01-13T10:36:35.587299Z",
            "url": "https://files.pythonhosted.org/packages/af/e6/d2b0c5e8ee4d40c97e0eaa5973342d310059528cb4d611a3d39f44d4464f/tokenization_scorer-1.1.8-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "edfe3904122e2dc1fa5b2408c7b756644708d794732249a15985a7ca943a0a86",
                "md5": "9a35a17df0761b8961ed073d8c24815b",
                "sha256": "03ebaa67a11b5aa5b29515d42a263a11ee0627c8e4f6d2ebb34eb159a227fd43"
            },
            "downloads": -1,
            "filename": "tokenization_scorer-1.1.8.tar.gz",
            "has_sig": false,
            "md5_digest": "9a35a17df0761b8961ed073d8c24815b",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.11",
            "size": 5291,
            "upload_time": "2025-01-13T10:36:40",
            "upload_time_iso_8601": "2025-01-13T10:36:40.245799Z",
            "url": "https://files.pythonhosted.org/packages/ed/fe/3904122e2dc1fa5b2408c7b756644708d794732249a15985a7ca943a0a86/tokenization_scorer-1.1.8.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-01-13 10:36:40",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "zouharvi",
    "github_project": "tokenization-scorer",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "tokenization-scorer"
}
```