text-det-metric

Name: text-det-metric
Version: 0.0.7
Home page: https://github.com/SWHL/TextDetMetric
Summary: A tool for computing text detection metrics
Upload time: 2024-04-10 02:07:49
Author: SWHL
Requires Python: <3.12,>=3.6
License: Apache-2.0
Keywords: ocr, text-det, hmean
Requirements: Shapely, datasets
## Text Detect Metric
<p align="left">
     <a href=""><img src="https://img.shields.io/badge/OS-Linux%2C%20Win%2C%20Mac-pink.svg"></a>
     <a href=""><img src="https://img.shields.io/badge/python->=3.6,<3.12-aff.svg"></a>
     <a href="https://pypi.org/project/text_det_metric/"><img alt="PyPI" src="https://img.shields.io/pypi/v/text_det_metric"></a>
     <a href="https://pepy.tech/project/text_det_metric"><img src="https://static.pepy.tech/personalized-badge/text_det_metric?period=total&units=abbreviation&left_color=grey&right_color=blue&left_text=Downloads "></a>
<a href="https://semver.org/"><img alt="SemVer2.0" src="https://img.shields.io/badge/SemVer-2.0-brightgreen"></a>
     <a href="https://github.com/psf/black"><img src="https://img.shields.io/badge/code%20style-black-000000.svg"></a>
</p>


- This library computes three metrics, `Precision`, `Recall`, and `H-mean`, to evaluate the quality of text detection algorithms. It is designed to be used together with the [ModelScope Text Detection Test Set](https://www.modelscope.cn/datasets/liekkas/text_det_test_dataset/summary).
- The metric computation code is adapted from [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR/blob/b13f99607653c220ba94df2a8650edac086b0f37/ppocr/metrics/eval_det_iou.py) and [DB](https://github.com/MhLiao/DB/blob/3c32b808d4412680310d3d28eeb6a2d5bf1566c5/concern/icdar2015_eval/detection/iou.py#L8).
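
The IoU-matching idea behind these metrics can be sketched in a few lines. The snippet below is a simplified illustration only, not the library's exact algorithm: it uses axis-aligned boxes `(x1, y1, x2, y2)` instead of the arbitrary quadrilaterals the library handles via Shapely, and a greedy one-to-one match at the 0.5 IoU threshold used by ICDAR-style evaluation.

```python
def iou(a, b):
    # a, b: axis-aligned boxes (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0


def evaluate(preds, gts, thresh=0.5):
    # Greedy one-to-one matching: each ground-truth box may
    # match at most one prediction with IoU >= thresh.
    matched = set()
    hits = 0
    for gt in gts:
        for i, pred in enumerate(preds):
            if i not in matched and iou(pred, gt) >= thresh:
                matched.add(i)
                hits += 1
                break
    precision = hits / len(preds) if preds else 0.0
    recall = hits / len(gts) if gts else 0.0
    hmean = (2 * precision * recall / (precision + recall)
             if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "hmean": hmean}
```

For example, with two ground-truth boxes and two predictions of which one overlaps well, `evaluate([(1, 1, 10, 10), (50, 50, 60, 60)], [(0, 0, 10, 10), (20, 20, 30, 30)])` yields precision, recall, and hmean of 0.5 each.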


### Evaluate on a custom dataset
- The walkthrough below evaluates `ch_ppocr_v3_det` on the text detection test set [liekkas/text_det_test_dataset](https://www.modelscope.cn/datasets/liekkas/text_det_test_dataset/summary); the same steps apply to any other detector or dataset.


### Usage
1. Install packages.
    ```bash
    pip install modelscope==1.5.2
    pip install text_det_metric
    ```
2. Run `get_pred_txt.py` to get `pred.txt`
    <details>
    <summary>Click to expand</summary>

    ```python
    from pathlib import Path

    import cv2
    import yaml
    from modelscope.msdatasets import MsDataset
    from tqdm import tqdm

    from det_demos.ch_ppocr_v3_det import TextDetector

    root_dir = Path(__file__).resolve().parent


    def read_yaml(yaml_path):
        with open(yaml_path, "rb") as f:
            data = yaml.load(f, Loader=yaml.Loader)
        return data


    test_data = MsDataset.load(
        "text_det_test_dataset",
        namespace="liekkas",
        subset_name="default",
        split="test",
    )

    config_path = root_dir / 'det_demos' / 'ch_ppocr_v3_det' / 'config.yaml'
    config = read_yaml(str(config_path))

    # Configure the onnx model path.
    config['model_path'] = str(root_dir / 'det_demos' / config['model_path'])

    text_detector = TextDetector(config)

    content = []
    for one_data in tqdm(test_data):
        img_path = one_data.get("image:FILE")

        img = cv2.imread(str(img_path))
        dt_boxes, elapse = text_detector(img)
        content.append(f"{img_path}\t{dt_boxes.tolist()}\t{elapse}")

    with open("pred.txt", "w", encoding="utf-8") as f:
        for v in content:
            f.write(f"{v}\n")
    ```
    </details>
3. Run `compute_metric.py` to get the metrics on the dataset
    ```python
    from text_det_metric import DetectionIoUEvaluator

    # Evaluate the predictions in pred.txt against the dataset's ground truth.
    evaluator = DetectionIoUEvaluator()
    metrics = evaluator("pred.txt")
    print(metrics)
    ```
4. Output
    ```python
    {
        'precision': 0.6958333333333333,
        'recall': 0.8608247422680413,
        'hmean': 0.7695852534562212,
        'avg_elapse': 2.0107483345529307
    }
    ```
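
Each line of the `pred.txt` produced in step 2 is tab-separated: the image path, the detected boxes as a Python list literal (one box per entry, four `[x, y]` corner points each), and the elapsed time. A minimal sketch of parsing such a line back, using a made-up example line:

```python
import ast

# Example line in the format written by get_pred_txt.py: path \t boxes \t elapse
line = "images/0001.jpg\t[[[10.0, 20.0], [110.0, 20.0], [110.0, 60.0], [10.0, 60.0]]]\t0.153"

img_path, boxes_str, elapse_str = line.rstrip("\n").split("\t")
boxes = ast.literal_eval(boxes_str)   # nested list: boxes -> 4 corner points -> [x, y]
elapse = float(elapse_str)

print(img_path, len(boxes), elapse)
```

`ast.literal_eval` safely parses the list literal without executing arbitrary code, which is why it is preferred over `eval` here.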
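
The reported `hmean` is the harmonic mean of precision and recall (the F1 score), which you can verify directly from the numbers above:

```python
precision = 0.6958333333333333
recall = 0.8608247422680413

# Harmonic mean: hmean = 2PR / (P + R)
hmean = 2 * precision * recall / (precision + recall)
print(hmean)  # matches the reported 0.7695852534562212 (up to float rounding)
```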

### See [TextDetMetric](https://github.com/SWHL/TextDetMetric) for details.



            
