nervaluate


Name: nervaluate
Version: 1.1.0
Summary: NER evaluation considering partial match scoring
Author: David S. Batista, Matthew Upson
License: MIT License
Requires Python: >=3.11
Keywords: named-entity-recognition, ner, evaluation-metrics, partial-match-scoring, nlp
Homepage: https://github.com/MantisAI/nervaluate
Upload time: 2025-09-06 08:30:38
            [![python](https://img.shields.io/badge/Python-3.11-3776AB.svg?style=flat&logo=python&logoColor=white)](https://www.python.org)
 
[![Checked with mypy](http://www.mypy-lang.org/static/mypy_badge.svg)](http://mypy-lang.org/)
 
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
 
![GitHub](https://img.shields.io/github/license/ivyleavedtoadflax/nervaluate)
 
![Pull Requests Welcome](https://img.shields.io/badge/pull%20requests-welcome-brightgreen.svg)
 
![PyPI](https://img.shields.io/pypi/v/nervaluate)

# nervaluate

`nervaluate` is a module for evaluating Named Entity Recognition (NER) models, as defined in SemEval-2013 Task 9.1.

The evaluation metrics output by nervaluate go beyond a simple token/tag-based schema and consider different scenarios 
based on whether all the tokens that belong to a named entity were correctly identified, and whether the correct 
entity type was assigned.

This full problem is described in detail in the [original blog](http://www.davidsbatista.net/blog/2018/05/09/Named_Entity_Evaluation/) 
post by [David Batista](https://github.com/davidsbatista), and this package extends the code in the [original repository](https://github.com/davidsbatista/NER-Evaluation) 
which accompanied the blog post.

The code draws heavily on the papers:

* [SemEval-2013 Task 9 : Extraction of Drug-Drug Interactions from Biomedical Texts (DDIExtraction 2013)](https://www.aclweb.org/anthology/S13-2056)

* [SemEval-2013 Task 9.1 - Evaluation Metrics](https://davidsbatista.net/assets/documents/others/semeval_2013-task-9_1-evaluation-metrics.pdf)

# Usage example

```
pip install nervaluate
```

One possible input format is lists of NER labels, where each list corresponds to a sentence and each element is a token-level label.
Initialize the `Evaluator` class with the true and predicted labels, and specify the entity types to evaluate.

```python
from nervaluate.evaluator import Evaluator

true = [
    ['O', 'B-PER', 'I-PER', 'O', 'O', 'O', 'B-ORG', 'I-ORG'],  # "The John Smith who works at Google Inc"
    ['O', 'B-LOC', 'B-PER', 'I-PER', 'O', 'O', 'B-DATE'],      # "In Paris Marie Curie lived in 1895"
]
  
pred = [
    ['O', 'O', 'B-PER', 'I-PER', 'O', 'O', 'B-ORG', 'I-ORG'],
    ['O', 'B-LOC', 'I-LOC', 'B-PER', 'O', 'O', 'B-DATE'],
]
   
evaluator = Evaluator(true, pred, tags=['PER', 'ORG', 'LOC', 'DATE'], loader="list")
```

Print the summary report for the evaluation, which will show the metrics for each entity type and evaluation scenario:

```python

print(evaluator.summary_report())

Scenario: all

              correct   incorrect     partial      missed    spurious   precision      recall    f1-score

ent_type            5           0           0           0           0        1.00        1.00        1.00
   exact            2           3           0           0           0        0.40        0.40        0.40
 partial            2           0           3           0           0        0.40        0.40        0.40
  strict            2           3           0           0           0        0.40        0.40        0.40
```  

or aggregated by entity type under a specific evaluation scenario:

```python
print(evaluator.summary_report(mode='entities'))  
  
Scenario: strict

             correct   incorrect     partial      missed    spurious   precision      recall    f1-score

   DATE            1           0           0           0           0        1.00        1.00        1.00
    LOC            0           1           0           0           0        0.00        0.00        0.00
    ORG            1           0           0           0           0        1.00        1.00        1.00
    PER            0           2           0           0           0        0.00        0.00        0.00
```

# Evaluation Scenarios

## Token level evaluation for NER is too simplistic

When running machine learning models for NER, it is common to report metrics at the individual token level. This may 
not be the best approach, as a named entity can be made up of multiple tokens, so evaluation at the full-entity level 
is desirable.

When comparing the gold standard annotations with the output of a NER system, different scenarios might occur:

__I. Surface string and entity type match__

| Token | Gold  | Prediction |
|-------|-------|------------|
| in    | O     | O          |
| New   | B-LOC | B-LOC      |
| York  | I-LOC | I-LOC      |
| .     | O     | O          |

__II. System hypothesized an incorrect entity__

| Token    | Gold | Prediction |
|----------|------|------------|
| an       | O    | O          |
| Awful    | O    | B-ORG      |
| Headache | O    | I-ORG      |
| in       | O    | O          |

__III. System misses an entity__

| Token | Gold  | Prediction |
|-------|-------|------------|
| in    | O     | O          |
| Palo  | B-LOC | O          |
| Alto  | I-LOC | O          |
| ,     | O     | O          |

Based on these three scenarios we have a simple classification evaluation that can be measured in terms of true 
positives, false positives and false negatives, from which precision, recall and F1-score can subsequently be 
computed for each named-entity type.

However, this simple schema ignores the possibility of partial matches, as well as scenarios in which the NER system 
gets the named-entity surface string correct but the type wrong. We might also want to evaluate these scenarios 
at the full-entity level.

For example:

__IV. System identifies the surface string but assigns the wrong entity type__

| Token | Gold  | Prediction |
|-------|-------|------------|
| I     | O     | O          |
| live  | O     | O          |
| in    | O     | O          |
| Palo  | B-LOC | B-ORG      |
| Alto  | I-LOC | I-ORG      |
| ,     | O     | O          |

__V. System gets the boundaries of the surface string wrong__

| Token   | Gold  | Prediction |
|---------|-------|------------|
| Unless  | O     | B-PER      |
| Karl    | B-PER | I-PER      |
| Smith   | I-PER | I-PER      |
| resigns | O     | O          |

__VI. System gets the boundaries and entity type wrong__

| Token   | Gold  | Prediction |
|---------|-------|------------|
| Unless  | O     | B-ORG      |
| Karl    | B-PER | I-ORG      |
| Smith   | I-PER | I-ORG      |
| resigns | O     | O          |
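
Detecting scenarios such as IV, V and VI requires comparing whole entities rather than individual tokens. The following is a minimal, illustrative sketch (not nervaluate's internal code) of how token-level BIO tags can be collapsed into labelled entity spans before comparison:

```python
# Illustrative sketch only: collapse token-level BIO tags into
# (label, start, end) entity spans so that predictions and gold
# annotations can be compared entity by entity.
def bio_to_spans(tags):
    spans = []
    start, label = None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-") or (tag.startswith("I-") and label != tag[2:]):
            # A new entity starts; close the previous one if it exists.
            if label is not None:
                spans.append((label, start, i - 1))
            label, start = tag[2:], i
        elif tag == "O":
            if label is not None:
                spans.append((label, start, i - 1))
            label, start = None, None
    if label is not None:
        spans.append((label, start, len(tags) - 1))
    return spans


# Scenario V: the predicted PER span starts one token too early.
gold = ["O", "B-PER", "I-PER", "O"]
pred = ["B-PER", "I-PER", "I-PER", "O"]
print(bio_to_spans(gold))  # [('PER', 1, 2)]
print(bio_to_spans(pred))  # [('PER', 0, 2)]
```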


## Defining evaluation metrics

How can we incorporate these scenarios into evaluation metrics? See the [original blog](http://www.davidsbatista.net/blog/2018/05/09/Named_Entity_Evaluation/) 
post for a detailed explanation; a summary is included here.

We can define the following five error categories:

| Error type      | Explanation                                                              |
|-----------------|--------------------------------------------------------------------------|
| Correct (COR)   | the system output and the golden annotation are the same                |
| Incorrect (INC) | the output of a system and the golden annotation don’t match             |
| Partial (PAR)   | system and the golden annotation are somewhat “similar” but not the same |
| Missing (MIS)   | a golden annotation is not captured by a system                          |
| Spurious (SPU)  | system produces a response which doesn’t exist in the golden annotation  |

These five error categories can be counted under four different evaluation schemas:

| Evaluation schema | Explanation                                                                       |
|-------------------|-----------------------------------------------------------------------------------|
| Strict            | exact boundary surface string match and entity type                               |
| Exact             | exact boundary match over the surface string, regardless of the type              |
| Partial           | partial boundary match over the surface string, regardless of the type            |
| Type              | some overlap between the system tagged entity and the gold annotation is required |

These five error categories and four evaluation schemas interact in the following ways:

| Scenario | Gold entity | Gold string    | Pred entity | Pred string         | Type | Partial | Exact | Strict |
|----------|-------------|----------------|-------------|---------------------|------|---------|-------|--------|
| III      | BRAND       | tikosyn        |             |                     | MIS  | MIS     | MIS   | MIS    |
| II       |             |                | BRAND       | healthy             | SPU  | SPU     | SPU   | SPU    |
| V        | DRUG        | warfarin       | DRUG        | of warfarin         | COR  | PAR     | INC   | INC    |
| IV       | DRUG        | propranolol    | BRAND       | propranolol         | INC  | COR     | COR   | INC    |
| I        | DRUG        | phenytoin      | DRUG        | phenytoin           | COR  | COR     | COR   | COR    |
| VI       | GROUP       | contraceptives | DRUG        | oral contraceptives | INC  | PAR     | INC   | INC    |

Precision, recall and F1-score are then calculated for each evaluation schema. To do this, two more quantities need 
to be defined:

```
POSSIBLE (POS) = COR + INC + PAR + MIS = TP + FN
ACTUAL (ACT) = COR + INC + PAR + SPU = TP + FP
```

We can then compute precision, recall and F1-score. Roughly speaking, precision is the percentage of named entities 
found by the NER system that are correct, and recall is the percentage of named entities in the golden annotations 
that are found by the NER system.

These are computed in two different ways, depending on whether we want an exact match (i.e., strict and exact) or a 
partial match (i.e., partial and type):

__Exact Match (i.e., strict and exact)__
```
Precision = (COR / ACT) = TP / (TP + FP)
Recall = (COR / POS) = TP / (TP+FN)
```

__Partial Match (i.e., partial and type)__
```
Precision = (COR + 0.5 × PAR) / ACT
Recall = (COR + 0.5 × PAR) / POS
```
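
The following is a minimal Python sketch of these formulas (the helper `compute_metrics` is hypothetical and not part of the nervaluate API), applied here to the strict schema counts from the scenario table above:

```python
# Hypothetical helper implementing the formulas above for one schema.
def compute_metrics(cor, inc, par, mis, spu, partial_or_type=False):
    possible = cor + inc + par + mis  # POS = TP + FN
    actual = cor + inc + par + spu    # ACT = TP + FP
    # partial and type schemas give half credit to partial matches
    weighted = cor + 0.5 * par if partial_or_type else cor
    precision = weighted / actual if actual else 0.0
    recall = weighted / possible if possible else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1


# Strict schema counts from the scenario table above:
# COR=1 (I), INC=3 (IV, V, VI), PAR=0, MIS=1 (III), SPU=1 (II).
print(compute_metrics(cor=1, inc=3, par=0, mis=1, spu=1))
# -> precision = recall = F1 = 0.20
```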

__Putting it all together__ (counting the outcomes of the six scenarios in the table above):

| Measure   | Type | Partial | Exact | Strict |
|-----------|------|---------|-------|--------|
| Correct   | 2    | 2       | 2     | 1      |
| Incorrect | 2    | 0       | 2     | 3      |
| Partial   | 0    | 2       | 0     | 0      |
| Missed    | 1    | 1       | 1     | 1      |
| Spurious  | 1    | 1       | 1     | 1      |
| Precision | 0.40 | 0.60    | 0.40  | 0.20   |
| Recall    | 0.40 | 0.60    | 0.40  | 0.20   |
| F1        | 0.40 | 0.60    | 0.40  | 0.20   |
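
For reference, these values can be reproduced with a short, self-contained sketch (illustrative only, independent of the nervaluate API) that applies the formulas above to the error counts of each schema:

```python
# Error counts per schema, taken from the six-scenario table above.
counts = {
    "type":    {"cor": 2, "inc": 2, "par": 0, "mis": 1, "spu": 1},
    "partial": {"cor": 2, "inc": 0, "par": 2, "mis": 1, "spu": 1},
    "exact":   {"cor": 2, "inc": 2, "par": 0, "mis": 1, "spu": 1},
    "strict":  {"cor": 1, "inc": 3, "par": 0, "mis": 1, "spu": 1},
}

for schema, c in counts.items():
    possible = c["cor"] + c["inc"] + c["par"] + c["mis"]  # POS
    actual = c["cor"] + c["inc"] + c["par"] + c["spu"]    # ACT
    # partial and type schemas give half credit to partial matches
    weighted = c["cor"] + (0.5 * c["par"] if schema in ("partial", "type") else 0)
    precision = weighted / actual
    recall = weighted / possible
    print(f"{schema:>7}: precision={precision:.2f} recall={recall:.2f}")
# type: 0.40, partial: 0.60, exact: 0.40, strict: 0.20
```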


## Notes

In scenarios IV and VI the entity type of the `true` and `pred` entities does not match. In both cases we score only 
against the true entity, not the predicted one. One could argue that the predicted entity should also be counted as 
spurious, but according to the definition of `spurious`:

* Spurious (SPU): the system produces a response which does not exist in the golden annotation;

In these cases an annotation does exist, just with a different entity type, so we count it only as incorrect.


## Contributing to the `nervaluate` package

### Extending the package to accept more formats

The `Evaluator` accepts the following formats:

* Nested lists containing NER labels
* CoNLL style tab delimited strings
* [prodi.gy](https://prodi.gy) style lists of spans

Additional formats can easily be added by creating a new loader class in `nervaluate/loaders.py`. The loader class 
should inherit from the `DataLoader` base class and implement the `load` method.

The `load` method should return a list of entity lists, where each entity is represented as a dictionary 
with `label`, `start`, and `end` keys.

The new loader can then be added to the `_setup_loaders` method in the `Evaluator` class, and can be selected with 
the `loader` argument when instantiating the `Evaluator` class.
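
As a rough illustration, a new loader for spaCy `Doc` objects might look like the sketch below. The class name `SpacyDocLoader` is hypothetical, and the `DataLoader` import path and the exact signature of `load` are assumptions that should be checked against `nervaluate/loaders.py`:

```python
# Hypothetical loader sketch; the DataLoader import path and the signature
# of `load` are assumptions to verify against nervaluate/loaders.py.
from nervaluate.loaders import DataLoader  # assumed import path


class SpacyDocLoader(DataLoader):
    """Convert a list of spaCy Doc objects into nervaluate's span format."""

    def load(self, data):
        # One list of entities per document; each entity is a dict with the
        # `label`, `start`, and `end` keys described above.
        return [
            [
                # Check the `end` convention (inclusive vs. exclusive token
                # index) against the existing loaders before using this.
                {"label": ent.label_, "start": ent.start, "end": ent.end}
                for ent in doc.ents
            ]
            for doc in data
        ]
```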

Here is a list of formats we intend to [include](https://github.com/MantisAI/nervaluate/issues/3).

### General Contributing

Improvements, adding new features and bug fixes are welcome. If you wish to participate in the development of `nervaluate` 
please read the guidelines in the [CONTRIBUTING.md](CONTRIBUTING.md) file.

---

Give a ⭐️ if this project helped you!

            
