# RemarkableOCR

- **Version:** 2024.9.2
- **Summary:** RemarkableOCR is a simple OCR tool with improved data, analytics, and rendering tools.
- **Requires Python:** >=3.8
- **Keywords:** remarkable, ocr, optical character recognition, machine learning, computer vision, books
- **Homepage:** https://github.com/markelwin/RemarkableOCR
- **Uploaded:** 2024-09-17 14:57:22
## RemarkableOCR is a simple OCR tool with improved data, analytics, and rendering tools.

RemarkableOCR creates Image-to-Text positional data and analytics for natural language processing on images.
RemarkableOCR is built on Google's Tesseract engine via the pytesseract package, with additional lightweight
processing that makes **its data more user-friendly and expansive**, plus one-line tools for:
- especially **books**, newspapers, and screenshots
- **debug** images
- **highlights** and **in-doc search**
- **typographical** analysis and **hand-written** annotations
- and **redaction**.

### installation
```
pip install RemarkableOCR
```
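
Note that pytesseract is only a wrapper: Google's Tesseract OCR engine must also be installed and discoverable on the system. A minimal sketch to confirm the engine is reachable (pytesseract raises `TesseractNotFoundError` otherwise):

```python
# Sanity check that the underlying Tesseract binary is installed and on PATH.
import pytesseract

print(pytesseract.get_tesseract_version())
```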


### five-minute demo: data, debug

![demo.data.png](remarkable/_db/docs/demo.data.png)

```python
from remarkable import RemarkableOCR
from PIL import Image

# Operation Moonglow; annotated by David Bernat
image_filename = "_db/docs/moonglow.jpg"
im = Image.open(image_filename)

##################################################################
#  using data
##################################################################
data = RemarkableOCR.ocr(image_filename)

# we can debug using an image
RemarkableOCR.create_debug_image(im, data).show()

# hey. what are all the "sea" words?
sea_words = [d for d in data if "sea" in d["text"].lower()]
RemarkableOCR.create_debug_image(im, sea_words).show()

# never mind; apply filters because this is a book page.
# this removes annotations at the edges, which are often numerous.
data = RemarkableOCR.filter_assumption_blocks_of_text(data)
margins = [d for d in data if d["is_first_in_line"] or d["is_last_in_line"]]
RemarkableOCR.create_debug_image(im, margins).show()

# transforms data to a space-separated string; adding new-lines at paragraph breaks.
readable = RemarkableOCR.readable_lines(data)
```
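
Note that `data` is a plain list of per-token dictionaries (the fields are documented in the table below), so it can be cached or inspected with ordinary tools. A minimal sketch, with the output filename chosen arbitrarily:

```python
# `data` is a list of dicts, one per detected token, so it serializes directly to JSON.
import json

with open("moonglow.ocr.json", "w") as f:
    json.dump(data, f, indent=2)
```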


### five-minute demo: highlighting

![demo.highlighting.jpg](remarkable/_db/docs/demo.highlighting.jpg)

```python
from remarkable import RemarkableOCR, colors
from PIL import Image

# Operation Moonglow; annotated by David Bernat
image_filename = "_db/docs/moonglow.jpg"
im = Image.open(image_filename)

##################################################################
#  using data
##################################################################
data = RemarkableOCR.ocr(image_filename)
data = RemarkableOCR.filter_assumption_blocks_of_text(data)

# to create a highlight bar sized from token pixel statistics;
# if height_px is None, it is calculated from the max/min height of the sequence
base = RemarkableOCR.document_statistics(data)
wm, ws = base["char"]["wm"], base["char"]["ws"]
height_px = wm + 6 * ws

# a simple search for phrases (lowercase, punctuation removed); returns one result for each of the four
phrases = ["the Space Age", "US Information Agency", "US State Department", "Neil Armstrong"]
found = RemarkableOCR.find_statements(phrases, data)

# we can highlight these using custom highlights
configs = [dict(highlight_color=colors.starlight),
           dict(highlight_color=colors.green),
           dict(highlight_color=colors.starlight),
           dict(highlight_color=colors.orange, highlight_alpha=0.40),
           ]

highlight = RemarkableOCR.highlight_statements(im, found, data, configs, height_px=height_px)
highlight.show()

# we can redact our secret activities shh :)
phrases = ["I spent the summer reading memos, reports, letters"]
found = RemarkableOCR.find_statements(phrases, data)
config = dict(highlight_color=colors.black, highlight_alpha=1.0)
RemarkableOCR.highlight_statements(highlight, found, data, config, height_px=height_px).show()
```
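
The highlight and redaction calls return PIL images (hence the `.show()` calls above), so results can be written to disk like any other image. A minimal sketch, with output paths chosen arbitrarily:

```python
# Save the highlighted page, then the redacted page, as ordinary image files.
highlight.save("moonglow.highlighted.jpg")

redacted = RemarkableOCR.highlight_statements(highlight, found, data, config, height_px=height_px)
redacted.save("moonglow.redacted.jpg")
```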

### what is all this data?

An asterisk in the **ours** column marks fields RemarkableOCR adds on top of the raw pytesseract output; an asterisk in the **r&d** column marks fields populated by the research features described below.

| key  | value      | ours | r&d | description                                                                          |
|:-----|:-----------|:-----|:---|:-------------------------------------------------------------------------------------|
|text| US         |      | | the token text, whitespace removed                                                   |
|conf| 0.96541046 |      |  | confidence score 0 to 1; 0.40 and up is reliable                                     |
|page_num| 1          |      |  | page number will always be 1 using single images                                     |
|block_num| 13         |      |  | a page consists of blocks top to bottom, 1 at top                                    |
|par_num| 1          |      |  | a block consists of paragraphs top to bottom, 1 at top of block                      |
|line_num| 3          |      |  | a paragraph consists of lines top to bottom, 1 at top of paragraph                   |
|word_num| 6          |      |  | a line consists of words left to right, 1 at the far left                            |
|absolute_line_number| 26         | *    |  | line number relative to page as a whole                                              |
|is_first_in_line| False      | *    |  | is the token the left-most in the line?                                              |
|is_last_in_line| False      | *    |  | is the token the right-most in the line?                                             |
|is_punct| False      | *    |  | is every character a punctuation character?                                          |
|is_alnum| True       | *    |  | is every character alphanumeric?                                                     |
|left| 1160.0     |      |  | left-edge pixel value of token bounding box                                          | 
|right| 1238.0     | *    |  | right-edge pixel value of token bounding box                                         |
|top| 2590.0     |      |  | top-edge pixel value of token bounding box                                           |
|bottom| 2638.0     | *    |  | bottom-edge pixel value of token bounding box                                        |
|width| 78.0       |      |  | width pixel value of token bounding box, equal to right minus left                   |
|height| 48.0       |      |  | height pixel value of token bounding box; equal to bottom minus top                  |
|font_size_pt|36.0| * | | simple approximation of font size in pts using 16px = 12pt standard from height      |
|amt_above_x_height| 1.0        |* |* | does character font typically extend above typographical x_height (yes=1.0, no=0.0)  |
|amt_below_baseline| 0.0        |* |* | does character font typically extend below typographical baseline (yes=1.0, no=0.0)  |
|is_highlighted|True|* |* | statistical estimation as to whether the word is underlined by ink or otherwise|
|has_unknown_char|False|*| | whether token contains a character not in our preassigned typography lists           |
|block_left| 116.0      | *    |  | left-edge of block of token; useful for fixed-width cross-line highlighting          |
|block_right| 2195.0     | *    |  | right-edge of block of token; useful for fixed-width cross-line highlighting         |
|level| 5          |      |  | describes granularity of the token, and will always be 5, indicating a token         |
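
Because each token is just a dictionary with the fields above, downstream analysis is ordinary Python. A minimal sketch (field names taken from the table; the confidence threshold is arbitrary) that groups confident tokens back into lines:

```python
# Rebuild each line's text from confident tokens, using only documented fields.
from collections import defaultdict

lines = defaultdict(list)
for d in data:
    if d["conf"] >= 0.40:
        lines[d["absolute_line_number"]].append(d["text"])

for n in sorted(lines):
    print(n, " ".join(lines[n]))
```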

## RemarkableOCR methods to notice

```python
from remarkable import RemarkableOCR
from PIL import Image

filename = "_db/docs/moonglow.jpg"

# The core RemarkableOCR call: returns a list of data dicts, one per token detected in the image.
data = RemarkableOCR.ocr(filename, confidence_threshold=0.50)

# A filter for identifying one solid block of text, like a book page or a newspaper without ads in between.
data = RemarkableOCR.filter_assumption_blocks_of_text(data, confidence_threshold=0.40)

# Convenience function that joins the sequential words of each line, with new lines at breaks; i.e., readable text.
readable = RemarkableOCR.readable_lines(data)

# Calculates basic statistics of the document itself, e.g., statistics on the pixel size of the font.
stats = RemarkableOCR.document_statistics(data)

im = Image.open(filename)
statements = ["Neil Armstrong"]

# Draws a black bounding box around each token to visually confirm every token was identified correctly.
debug_im = RemarkableOCR.create_debug_image(im, data)

# Uses simple regex to identify exact string matches in sequences of tokens, after string normalization.
found = RemarkableOCR.find_statements(statements, data)

# Convenience function for highlighting multiple sequences found=Array<[_, start_i, end_i]> using a custom config.
highlight_im = RemarkableOCR.highlight_statements(im, found, data, config=None, height_px=None)
```
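
The `found` structure is documented above as `Array<[_, start_i, end_i]>`. Assuming `start_i` and `end_i` index into `data` and the end index is inclusive (an assumption for illustration only), the matched text can be recovered directly:

```python
# Hypothetical consumption of `found`: pull each matched token span back out of `data`.
for _, start_i, end_i in found:
    tokens = [d["text"] for d in data[start_i:end_i + 1]]
    print(" ".join(tokens))
```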

### five-minute demo: research features
These are collections of features and improvements that are not thoroughly tested beyond their narrow demonstration
scope, usually books or newspapers. Results should be expected to be unstable for numerous edge cases, and these APIs
should be considered moderately unstable; they are also the most responsive to user feedback.

![demo.typographics.png](remarkable/_db/docs/demo.typographics.png)
```python
from remarkable import RemarkableOCR, RemarkableOCRResearch, plotting
from PIL import Image
import more_itertools
import random

# Operation Moonglow; annotated by David Bernat
image_filename = "_db/docs/moonglow.jpg"
im = Image.open(image_filename)
data = RemarkableOCR.ocr(image_filename)
data = RemarkableOCR.filter_assumption_blocks_of_text(data)

# we can use many recurrences of words (about ten sentences' worth) to estimate typographical information about
# individual characters, including their typographical baseline and x_height, and the typical dimensions of each
# character. this statistical procedure is robust for, and tested on, mostly uniform text fonts (i.e., book pages).
data, typo = RemarkableOCRResearch.enrich_typographical_statistics(data)
if typo is None: raise RuntimeError("typography failed to converge. please contribute this image to an issue")
RemarkableOCRResearch.create_typography_debug_image(im, data).show()

# we can use computer vision to estimate whether images have handwritten underlining; because the typographical
# features provide very helpful constraints on where underlining occurs, this feature is only available when
# typography converges.
data = RemarkableOCRResearch.enrich_handwritten_features(im, data)
hwords = [d for d in data if d["is_highlighted"]]
RemarkableOCR.create_debug_image(im, hwords).show()

# we can also analyze the specific character instances estimated by typographical features. first we show all letters t.
# second we organize the char_bboxes by character and sort by widest character, choosing a random example of each. third
# we have a little fun by generating arbitrary sentences (not recommended for hostage taking or love letters, please).
# this demo uses a utility that takes a list of images and plots them in a tile grid left to right top to bottom.
t_data = [t for word in typo["char_bboxes"] for t in word if t["char"] == "t"]
images = [im.crop(dct["bbox"]) for dct in t_data]
plotting.tile_images(images, tile_wh=[None, 100], n_width=20).show()

char_boxes_by_char = [t for word in typo["char_bboxes"] for t in word]
char_boxes_by_char = more_itertools.map_reduce(char_boxes_by_char, lambda item: item["char"], lambda item: item["bbox"])
chars_by_width = dict(sorted(typo["font_char_widths"].items(), reverse=True, key=lambda item: item[1])).keys()

random.seed(0)
chars_data = [random.choice(char_boxes_by_char[c]) for c in chars_by_width]
images = [im.crop(bbox) for bbox in chars_data]
plotting.tile_images(images, tile_wh=[None, 100], n_width=11).show()

quote = "Same road, no cars. It's magic."
images = []
for word in quote.split(" "):
    chars_data = [random.choice(char_boxes_by_char[c]) for c in word if c != " "]
    as_images = [im.crop(bbox) for bbox in chars_data]
    images.append(plotting.tile_images(as_images, tile_wh=[None, 100], pad_wh=[0,0], n_width=len(word)))
plotting.tile_images(images, tile_wh=[None, 100], pad_wh=[60, 5], n_width=2).show()
```


### Licensing & Stuff
<div>
<img align="left" width="100" height="100" style="margin-right: 10px" src="remarkable/_db/docs/starlight.logo.icon.improved.png">
Hey. I took time to build this. There are a lot of pain points that I solved for you, and a lot of afternoons staring 
outside the coffeeshop window at the sunshine. Not years, because I am a very skilled, competent software engineer. But
enough, okay? Use this package. Ask for improvements. Integrate this into your products. Complain when it breaks. 
Reference the package by company and name. Starlight Remarkable and RemarkableOCR. Email us to let us know!
</div>


<br /><br /><br />
Starlight LLC <br />
Copyright 2024 <br /> 
All Rights Reserved <br />
GNU GENERAL PUBLIC LICENSE <br />

            
