fadoudou2

Name: fadoudou2
Version: 2.7.0.3.8
Home page: https://github.com/PaddlePaddle/PaddleOCR
Summary: Awesome OCR toolkits based on PaddlePaddle (8.6M ultra-lightweight pre-trained model, support training and deployment among server, mobile, embedded and IoT devices)
Upload time: 2024-04-01 09:38:58
Maintainer: None
Docs URL: None
Author: None
Requires Python: None
License: Apache License 2.0
Keywords: ocr, textdetection, textrecognition, paddleocr, crnn, east, star-net, rosetta, ocrlite, db, chineseocr, chinesetextdetection, chinesetextrecognition
Requirements: No requirements were recorded.
Travis-CI: No Travis.
Coveralls test coverage: No coveralls.

# Paddleocr Package

## 1 Get started quickly
### 1.1 install package
install by pypi
```bash
pip install "paddleocr>=2.0.1" # Recommend to use version 2.0.1+
```

build own whl package and install
```bash
python3 setup.py bdist_wheel
pip3 install dist/paddleocr-x.x.x-py3-none-any.whl # x.x.x is the version of paddleocr
```
## 2 Use
### 2.1 Use by code
The paddleocr whl package will automatically download the lightweight PP-OCR model as the default model; it can be customized and replaced as described in section 3 **Use custom model**.

* detection angle classification and recognition
```python
from paddleocr import PaddleOCR,draw_ocr
# Paddleocr supports Chinese, English, French, German, Korean and Japanese.
# You can set the parameter `lang` as `ch`, `en`, `french`, `german`, `korean`, `japan`
# to switch the language model.
ocr = PaddleOCR(use_angle_cls=True, lang='en') # need to run only once to download and load model into memory
img_path = 'PaddleOCR/doc/imgs_en/img_12.jpg'
result = ocr.ocr(img_path, cls=True)
for idx in range(len(result)):
    res = result[idx]
    for line in res:
        print(line)

# draw result
from PIL import Image
result = result[0]
image = Image.open(img_path).convert('RGB')
boxes = [line[0] for line in result]
txts = [line[1][0] for line in result]
scores = [line[1][1] for line in result]
im_show = draw_ocr(image, boxes, txts, scores, font_path='/path/to/PaddleOCR/doc/fonts/simfang.ttf')
im_show = Image.fromarray(im_show)
im_show.save('result.jpg')
```

Output will be a list; each item contains the bounding box, text and recognition confidence
```bash
[[[442.0, 173.0], [1169.0, 173.0], [1169.0, 225.0], [442.0, 225.0]], ['ACKNOWLEDGEMENTS', 0.99283075]]
[[[393.0, 340.0], [1207.0, 342.0], [1207.0, 389.0], [393.0, 387.0]], ['We would like to thank all the designers and', 0.9357758]]
[[[399.0, 398.0], [1204.0, 398.0], [1204.0, 433.0], [399.0, 433.0]], ['contributors whohave been involved in the', 0.9592447]]
......
```
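
Each printed item pairs a quadrilateral box (four `[x, y]` corner points) with a `[text, confidence]` pair. As a minimal sketch (assuming `result` is the value returned by `ocr.ocr` above, before the `result = result[0]` line of the drawing snippet), the nested list can be flattened into plain dictionaries for downstream processing; `to_records` is a hypothetical helper, not part of the paddleocr API:
```python
# Flatten the nested OCR result into a list of dicts (illustrative helper).
def to_records(result):
    records = []
    for page in result:                  # one entry per input image / PDF page
        for box, (text, score) in page:  # each line is [box, [text, score]]
            records.append({"box": box, "text": text, "score": score})
    return records

records = to_records(result)
print(records[0]["text"], records[0]["score"])
```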

Visualization of results

<div align="center">
    <img src="../imgs_results/whl/12_det_rec.jpg" width="800">
</div>

* detection and recognition
```python
from paddleocr import PaddleOCR,draw_ocr
ocr = PaddleOCR(lang='en') # need to run only once to download and load model into memory
img_path = 'PaddleOCR/doc/imgs_en/img_12.jpg'
result = ocr.ocr(img_path, cls=False)
for idx in range(len(result)):
    res = result[idx]
    for line in res:
        print(line)

# draw result
from PIL import Image
result = result[0]
image = Image.open(img_path).convert('RGB')
boxes = [line[0] for line in result]
txts = [line[1][0] for line in result]
scores = [line[1][1] for line in result]
im_show = draw_ocr(image, boxes, txts, scores, font_path='/path/to/PaddleOCR/doc/fonts/simfang.ttf')
im_show = Image.fromarray(im_show)
im_show.save('result.jpg')
```

Output will be a list; each item contains the bounding box, text and recognition confidence
```bash
[[[442.0, 173.0], [1169.0, 173.0], [1169.0, 225.0], [442.0, 225.0]], ['ACKNOWLEDGEMENTS', 0.99283075]]
[[[393.0, 340.0], [1207.0, 342.0], [1207.0, 389.0], [393.0, 387.0]], ['We would like to thank all the designers and', 0.9357758]]
[[[399.0, 398.0], [1204.0, 398.0], [1204.0, 433.0], [399.0, 433.0]], ['contributors whohave been involved in the', 0.9592447]]
......
```

Visualization of results

<div align="center">
    <img src="../imgs_results/whl/12_det_rec.jpg" width="800">
</div>

* classification and recognition
```python
from paddleocr import PaddleOCR
ocr = PaddleOCR(use_angle_cls=True, lang='en') # need to run only once to load model into memory
img_path = 'PaddleOCR/doc/imgs_words_en/word_10.png'
result = ocr.ocr(img_path, det=False, cls=True)
for idx in range(len(result)):
    res = result[idx]
    for line in res:
        print(line)
```

Output will be a list; each item contains the recognized text and its confidence
```bash
['PAIN', 0.990372]
```

* only detection
```python
from paddleocr import PaddleOCR,draw_ocr
ocr = PaddleOCR() # need to run only once to download and load model into memory
img_path = 'PaddleOCR/doc/imgs_en/img_12.jpg'
result = ocr.ocr(img_path,rec=False)
for idx in range(len(result)):
    res = result[idx]
    for line in res:
        print(line)

# draw result
from PIL import Image
result = result[0]
image = Image.open(img_path).convert('RGB')
im_show = draw_ocr(image, result, txts=None, scores=None, font_path='/path/to/PaddleOCR/doc/fonts/simfang.ttf')
im_show = Image.fromarray(im_show)
im_show.save('result.jpg')
```

Output will be a list; each item contains only the bounding box
```bash
[[756.0, 812.0], [805.0, 812.0], [805.0, 830.0], [756.0, 830.0]]
[[820.0, 803.0], [1085.0, 801.0], [1085.0, 836.0], [820.0, 838.0]]
[[393.0, 801.0], [715.0, 805.0], [715.0, 839.0], [393.0, 836.0]]
......
```

Visualization of results

<div align="center">
    <img src="../imgs_results/whl/12_det.jpg" width="800">
</div>
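
The detection-only result is just a list of quadrilaterals, so a common next step is to crop each detected region out of the original image. Below is a rough sketch (not part of the package) that uses an axis-aligned bounding rectangle for simplicity; rotated text would need a perspective transform. It assumes `result` is the value returned by `ocr.ocr(img_path, rec=False)` above, before the `result = result[0]` line:
```python
import cv2
import numpy as np

img = cv2.imread(img_path)       # img_path from the snippet above
boxes = result[0]                # detection only: one list of quads per image
for i, box in enumerate(boxes):
    pts = np.array(box, dtype=np.int32)
    x, y, w, h = cv2.boundingRect(pts)   # axis-aligned rectangle around the quad
    cv2.imwrite('crop_{}.jpg'.format(i), img[y:y + h, x:x + w])
```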

* only recognition
```python
from paddleocr import PaddleOCR
ocr = PaddleOCR(lang='en') # need to run only once to load model into memory
img_path = 'PaddleOCR/doc/imgs_words_en/word_10.png'
result = ocr.ocr(img_path, det=False, cls=False)
for idx in range(len(result)):
    res = result[idx]
    for line in res:
        print(line)
```

Output will be a list; each item contains the recognized text and its confidence
```bash
['PAIN', 0.990372]
```

* only classification
```python
from paddleocr import PaddleOCR
ocr = PaddleOCR(use_angle_cls=True) # need to run only once to load model into memory
img_path = 'PaddleOCR/doc/imgs_words_en/word_10.png'
result = ocr.ocr(img_path, det=False, rec=False, cls=True)
for idx in range(len(result)):
    res = result[idx]
    for line in res:
        print(line)
```

Output will be a list; each item contains the classification result and its confidence
```bash
['0', 0.99999964]
```
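
Because the classifier only distinguishes the orientations listed in `label_list` (`'0'` and `'180'` by default), its output is typically used to flip upside-down images before recognition. A minimal sketch, assuming `result` comes from the call above and using an arbitrary confidence threshold:
```python
import cv2

img = cv2.imread(img_path)
label, score = result[0][0]          # e.g. ['0', 0.99999964] or ['180', ...]
if label == '180' and score > 0.9:   # 0.9 is an illustrative threshold, not a recommendation
    img = cv2.rotate(img, cv2.ROTATE_180)
cv2.imwrite('rotated.jpg', img)
```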

### 2.2 Use by command line

show help information
```bash
paddleocr -h
```

* detection classification and recognition
```bash
paddleocr --image_dir PaddleOCR/doc/imgs_en/img_12.jpg --use_angle_cls true --lang en
```

Output will be a list; each item contains the bounding box, text and recognition confidence
```bash
[[[441.0, 174.0], [1166.0, 176.0], [1165.0, 222.0], [441.0, 221.0]], ('ACKNOWLEDGEMENTS', 0.9971134662628174)]
[[[403.0, 346.0], [1204.0, 348.0], [1204.0, 384.0], [402.0, 383.0]], ('We would like to thank all the designers and', 0.9761400818824768)]
[[[403.0, 396.0], [1204.0, 398.0], [1204.0, 434.0], [402.0, 433.0]], ('contributors who have been involved in the', 0.9791957139968872)]
......
```

PDF files are also supported. You can restrict inference to the first few pages with the `page_num` parameter; the default is 0, which means all pages are processed.
```bash
paddleocr --image_dir ./xxx.pdf --use_angle_cls true --use_gpu false --page_num 2
```

* detection and recognition
```bash
paddleocr --image_dir PaddleOCR/doc/imgs_en/img_12.jpg --lang en
```

Output will be a list; each item contains the bounding box, text and recognition confidence
```bash
[[[441.0, 174.0], [1166.0, 176.0], [1165.0, 222.0], [441.0, 221.0]], ('ACKNOWLEDGEMENTS', 0.9971134662628174)]
[[[403.0, 346.0], [1204.0, 348.0], [1204.0, 384.0], [402.0, 383.0]], ('We would like to thank all the designers and', 0.9761400818824768)]
[[[403.0, 396.0], [1204.0, 398.0], [1204.0, 434.0], [402.0, 433.0]], ('contributors who have been involved in the', 0.9791957139968872)]
......
```

* classification and recognition
```bash
paddleocr --image_dir PaddleOCR/doc/imgs_words_en/word_10.png --use_angle_cls true --det false --lang en
```

Output will be a list; each item contains the text and recognition confidence
```bash
['PAIN', 0.9934559464454651]
```

* only detection
```bash
paddleocr --image_dir PaddleOCR/doc/imgs_en/img_12.jpg --rec false
```

Output will be a list; each item contains only the bounding box
```bash
[[397.0, 802.0], [1092.0, 802.0], [1092.0, 841.0], [397.0, 841.0]]
[[397.0, 750.0], [1211.0, 750.0], [1211.0, 789.0], [397.0, 789.0]]
[[397.0, 702.0], [1209.0, 698.0], [1209.0, 734.0], [397.0, 738.0]]
......
```

* only recognition
```bash
paddleocr --image_dir PaddleOCR/doc/imgs_words_en/word_10.png --det false --lang en
```

Output will be a list; each item contains the text and recognition confidence
```bash
['PAIN', 0.9934559464454651]
```

* only classification
```bash
paddleocr --image_dir PaddleOCR/doc/imgs_words_en/word_10.png --use_angle_cls true --det false --rec false
```

Output will be a list; each item contains the classification result and its confidence
```bash
['0', 0.99999964]
```

## 3 Use custom model
When the built-in models do not meet your needs, you can use your own trained models.
First, refer to the [export](./detection_en.md#4-inference) doc to convert your detection and recognition models to inference models, then use them as follows.

### 3.1 Use by code

```python
from paddleocr import PaddleOCR,draw_ocr
# The path of detection and recognition model must contain model and params files
ocr = PaddleOCR(det_model_dir='{your_det_model_dir}', rec_model_dir='{your_rec_model_dir}', rec_char_dict_path='{your_rec_char_dict_path}', cls_model_dir='{your_cls_model_dir}', use_angle_cls=True)
img_path = 'PaddleOCR/doc/imgs_en/img_12.jpg'
result = ocr.ocr(img_path, cls=True)
for idx in range(len(result)):
    res = result[idx]
    for line in res:
        print(line)

# draw result
from PIL import Image
result = result[0]
image = Image.open(img_path).convert('RGB')
boxes = [line[0] for line in result]
txts = [line[1][0] for line in result]
scores = [line[1][1] for line in result]
im_show = draw_ocr(image, boxes, txts, scores, font_path='/path/to/PaddleOCR/doc/fonts/simfang.ttf')
im_show = Image.fromarray(im_show)
im_show.save('result.jpg')
```

### 3.2 Use by command line

```bash
paddleocr --image_dir PaddleOCR/doc/imgs/11.jpg --det_model_dir {your_det_model_dir} --rec_model_dir {your_rec_model_dir} --rec_char_dict_path {your_rec_char_dict_path} --cls_model_dir {your_cls_model_dir} --use_angle_cls true
```

## 4 Use web images or numpy array as input

### 4.1 Web image

- Use by code
```python
from paddleocr import PaddleOCR, draw_ocr, download_with_progressbar
ocr = PaddleOCR(use_angle_cls=True, lang="ch") # need to run only once to download and load model into memory
img_path = 'http://n.sinaimg.cn/ent/transform/w630h933/20171222/o111-fypvuqf1838418.jpg'
result = ocr.ocr(img_path, cls=True)
for idx in range(len(result)):
    res = result[idx]
    for line in res:
        print(line)

# show result
from PIL import Image
result = result[0]
# Image.open cannot read a URL directly, so download the image to a local file first
download_with_progressbar(img_path, 'tmp.jpg')
image = Image.open('tmp.jpg').convert('RGB')
boxes = [line[0] for line in result]
txts = [line[1][0] for line in result]
scores = [line[1][1] for line in result]
im_show = draw_ocr(image, boxes, txts, scores, font_path='/path/to/PaddleOCR/doc/fonts/simfang.ttf')
im_show = Image.fromarray(im_show)
im_show.save('result.jpg')
```
- Use by command line
```bash
paddleocr --image_dir http://n.sinaimg.cn/ent/transform/w630h933/20171222/o111-fypvuqf1838418.jpg --use_angle_cls=true
```

### 4.2 Numpy array
Numpy arrays are supported as input only when paddleocr is used by code.

```python
import cv2
from paddleocr import PaddleOCR, draw_ocr
ocr = PaddleOCR(use_angle_cls=True, lang="ch") # need to run only once to download and load model into memory
img_path = 'PaddleOCR/doc/imgs/11.jpg'
img = cv2.imread(img_path)
# img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # uncomment this line if your own trained model supports grayscale images
result = ocr.ocr(img, cls=True)
for idx in range(len(result)):
    res = result[idx]
    for line in res:
        print(line)

# show result
from PIL import Image
result = result[0]
image = Image.open(img_path).convert('RGB')
boxes = [line[0] for line in result]
txts = [line[1][0] for line in result]
scores = [line[1][1] for line in result]
im_show = draw_ocr(image, boxes, txts, scores, font_path='/path/to/PaddleOCR/doc/fonts/simfang.ttf')
im_show = Image.fromarray(im_show)
im_show.save('result.jpg')
```
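
Since `ocr.ocr` accepts an ndarray, an image that already lives in memory as a `PIL.Image` can also be passed in after converting it to the BGR layout produced by `cv2.imread`. A small sketch under that assumption, reusing the `ocr` instance created above:
```python
import cv2
import numpy as np
from PIL import Image

pil_img = Image.open('PaddleOCR/doc/imgs/11.jpg').convert('RGB')
bgr = cv2.cvtColor(np.array(pil_img), cv2.COLOR_RGB2BGR)  # PIL is RGB, OpenCV-style arrays are BGR
result = ocr.ocr(bgr, cls=True)
```
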
## 5 PDF file
- Use by command line

You can restrict inference to the first few pages with the `page_num` parameter; the default is 0, which means all pages are processed.
```bash
paddleocr --image_dir ./xxx.pdf --use_angle_cls true --use_gpu false --page_num 2
```
- Use by code

```python
from paddleocr import PaddleOCR, draw_ocr

# Paddleocr supports Chinese, English, French, German, Korean and Japanese.
# You can set the parameter `lang` as `ch`, `en`, `fr`, `german`, `korean`, `japan`
# to switch the language model.
ocr = PaddleOCR(use_angle_cls=True, lang="ch", page_num=2)  # need to run only once to download and load model into memory
img_path = './xxx.pdf'
result = ocr.ocr(img_path, cls=True)
for idx in range(len(result)):
    res = result[idx]
    for line in res:
        print(line)

# draw result
import fitz
from PIL import Image
import cv2
import numpy as np
imgs = []
with fitz.open(img_path) as pdf:
    for pg in range(0, pdf.page_count):  # older PyMuPDF versions use pdf.pageCount
        page = pdf[pg]
        mat = fitz.Matrix(2, 2)
        pm = page.get_pixmap(matrix=mat, alpha=False)  # older versions: page.getPixmap
        # if width or height > 2000 pixels, don't enlarge the image
        if pm.width > 2000 or pm.height > 2000:
            pm = page.get_pixmap(matrix=fitz.Matrix(1, 1), alpha=False)

        img = Image.frombytes("RGB", [pm.width, pm.height], pm.samples)
        img = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR)
        imgs.append(img)
for idx in range(len(result)):
    res = result[idx]
    image = imgs[idx]
    boxes = [line[0] for line in res]
    txts = [line[1][0] for line in res]
    scores = [line[1][1] for line in res]
    im_show = draw_ocr(image, boxes, txts, scores, font_path='doc/fonts/simfang.ttf')
    im_show = Image.fromarray(im_show)
    im_show.save('result_page_{}.jpg'.format(idx))
```

## 6 Parameter Description

| Parameter                    | Description                                                                                                                                                                                                                 | Default value                  |
|-------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------|
| use_gpu                 | use GPU or not                                                                                                                                                                                                          | TRUE                    |
| gpu_mem                 | GPU memory size used for initialization                                                                                                                                                                                              | 8000M                   |
| image_dir               | The images path or folder path for predicting when used by the command line                                                                                                                                                                           |                         |
| page_num               | Valid when the input is a PDF file; only the first page_num pages are predicted; by default (0) all pages are predicted | 0 |
| det_algorithm           | Type of detection algorithm selected                                                                                                                                                                                                   | DB                      |
| det_model_dir           | The text detection inference model folder. There are two ways to set it: 1. None: automatically download the built-in model to `~/.paddleocr/det`; 2. the path of an inference model converted by yourself, which must contain the model and params files | None           |
| det_max_side_len        | The maximum size of the long side of the image. When the long side exceeds this value, the long side will be resized to this size, and the short side will be scaled proportionally                                                                                                                         | 960                     |
| det_db_thresh           | Binarization threshold value of DB output map                                                                                                                                                                                        | 0.3                     |
| det_db_box_thresh       | The threshold value of the DB output box. Boxes with a score lower than this value will be discarded | 0.5                     |
| det_db_unclip_ratio     | The expanded ratio of DB output box                                                                                                                                                                                             | 2                       |
| det_db_score_mode | Controls how the score of the detection box is calculated. Options are 'fast' and 'slow'; 'slow' is recommended if the text to be detected is curved | 'fast' |
| det_east_score_thresh   | Binarization threshold value of EAST output map                                                                                                                                                                                       | 0.8                     |
| det_east_cover_thresh   | The threshold value of the EAST output box. Boxes with a score lower than this value will be discarded | 0.1                     |
| det_east_nms_thresh     | The NMS threshold value of EAST model output box                                                                                                                                                                                              | 0.2                     |
| rec_algorithm           | Type of recognition algorithm selected                                                                                                                                                                                                | CRNN                    |
| rec_model_dir           | The text recognition inference model folder. There are two ways to set it: 1. None: automatically download the built-in model to `~/.paddleocr/rec`; 2. the path of an inference model converted by yourself, which must contain the model and params files | None |
| rec_image_shape         | image shape of recognition algorithm                                                                                                                                                                                            | "3,32,320"              |
| rec_batch_num           | When performing recognition, the batchsize of forward images                                                                                                                                                                                         | 30                      |
| max_text_length         | The maximum text length that the recognition algorithm can recognize                                                                                                                                                                                         | 25                      |
| rec_char_dict_path      | The alphabet path, which needs to be changed to your own path when `rec_model_dir` uses mode 2 (a custom model) | ./ppocr/utils/ppocr_keys_v1.txt |
| use_space_char          | Whether to recognize spaces                                                                                                                                                                                                         | TRUE                    |
| drop_score          | Filter the output by score (from the recognition model), and those below this score will not be returned                                                                                                                                                                                                        | 0.5                    |
| use_angle_cls          | Whether to load classification model                                                                                                                                                                                                       | FALSE                    |
| cls_model_dir           | The classification inference model folder. There are two ways to set it: 1. None: automatically download the built-in model to `~/.paddleocr/cls`; 2. the path of an inference model converted by yourself, which must contain the model and params files | None |
| cls_image_shape         | image shape of classification algorithm                                                                                                                                                                                            | "3,48,192"              |
| label_list         | label list of classification algorithm                                                                                                                                                                                            | ['0','180']           |
| cls_batch_num           | When performing classification, the batchsize of forward images                                                                                                                                                                                         | 30                      |
| enable_mkldnn           | Whether to enable mkldnn                                                                                                                                                                                                       | FALSE                   |
| use_zero_copy_run           | Whether to forward by zero_copy_run                                                                                                                                                                               | FALSE                   |
| lang                     | The language to use; currently only Chinese (ch), English (en), French (french), German (german), Korean (korean) and Japanese (japan) are supported | ch                    |
| det                     | Enable detection when `ppocr.ocr` is executed | TRUE                    |
| rec                     | Enable recognition when `ppocr.ocr` is executed | TRUE                    |
| cls                     | Enable classification when `ppocr.ocr` is executed (in command-line mode, use `use_angle_cls` to control whether classification is applied in the forward pass) | FALSE                    |
| show_log                     | Whether to print log| FALSE                    |
| type                     | Perform OCR or table structure analysis; the value is one of ['ocr','structure'] | ocr                    |
| ocr_version                     | OCR model version. Currently supported: PP-OCRv3 supports Chinese and English detection, recognition and multilingual recognition, plus the direction classifier model; PP-OCRv2 supports Chinese detection and recognition models; PP-OCR supports Chinese detection, recognition and direction classifier models, plus multilingual recognition models | PP-OCRv3                 |
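
Most of these options can be passed directly as keyword arguments to the `PaddleOCR` constructor (or as `--flag value` pairs on the command line). A short sketch combining a few of the parameters documented above; the values are illustrative, not recommendations:
```python
from paddleocr import PaddleOCR

ocr = PaddleOCR(
    use_gpu=False,            # run on CPU
    lang='en',                # English detection/recognition models
    use_angle_cls=True,       # load the 0/180 degree classifier
    det_db_box_thresh=0.6,    # discard low-score detection boxes
    drop_score=0.6,           # drop low-confidence recognition results
    show_log=False,           # silence predictor logging
)
result = ocr.ocr('PaddleOCR/doc/imgs_en/img_12.jpg', cls=True)
```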

            
