# Marker
Marker converts PDFs to markdown, JSON, and HTML quickly and accurately.
- Supports a wide range of documents
- Supports all languages
- Removes headers/footers/other artifacts
- Formats tables and code blocks
- Extracts and saves images along with the markdown
- Converts equations to latex
- Easily extensible with your own formatting and logic
- Works on GPU, CPU, or MPS
## How it works
Marker is a pipeline of deep learning models:
- Extract text, OCR if necessary (heuristics, [surya](https://github.com/VikParuchuri/surya))
- Detect page layout and find reading order ([surya](https://github.com/VikParuchuri/surya))
- Clean and format each block (heuristics, [texify](https://github.com/VikParuchuri/texify), [tabled](https://github.com/VikParuchuri/tabled))
- Combine blocks and postprocess complete text
It only uses models where necessary, which improves speed and accuracy.
## Examples
| PDF | File type | Markdown | JSON |
|-----|-----------|------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------|
| [Think Python](https://greenteapress.com/thinkpython/thinkpython.pdf) | Textbook | [View](https://github.com/VikParuchuri/marker/blob/master/data/examples/markdown/thinkpython/thinkpython.md) | [View](https://github.com/VikParuchuri/marker/blob/master/data/examples/json/thinkpython.json) |
| [Switch Transformers](https://arxiv.org/pdf/2101.03961.pdf) | arXiv paper | [View](https://github.com/VikParuchuri/marker/blob/master/data/examples/markdown/switch_transformers/switch_trans.md) | [View](https://github.com/VikParuchuri/marker/blob/master/data/examples/json/switch_trans.json) |
| [Multi-column CNN](https://arxiv.org/pdf/1804.07821.pdf) | arXiv paper | [View](https://github.com/VikParuchuri/marker/blob/master/data/examples/markdown/multicolcnn/multicolcnn.md) | [View](https://github.com/VikParuchuri/marker/blob/master/data/examples/json/multicolcnn.json) |
## Performance
![Benchmark overall](data/images/overall.png)
The above results are with marker configured to use ~7GB of VRAM on an A10.
See [below](#benchmarks) for detailed speed and accuracy benchmarks, and instructions on how to run your own benchmarks.
# Commercial usage
I want marker to be as widely accessible as possible, while still funding my development/training costs. Research and personal usage is always okay, but there are some restrictions on commercial usage.
The weights for the models are licensed `cc-by-nc-sa-4.0`, but I will waive that for any organization under $5M USD in gross revenue in the most recent 12-month period AND under $5M in lifetime VC/angel funding raised. You also must not be competitive with the [Datalab API](https://www.datalab.to/). If you want to remove the GPL license requirements (dual-license) and/or use the weights commercially over the revenue limit, check out the options [here](https://www.datalab.to).
# Hosted API
There's a hosted API for marker available [here](https://www.datalab.to/):
- Supports PDFs, Word documents, and PowerPoints
- 1/4th the price of leading cloud-based competitors
- High uptime (99.99%), quality, and speed (around 15 seconds to convert a 250 page PDF)
# Community
[Discord](https://discord.gg/KuZwXNGnfH) is where we discuss future development.
# Limitations
PDF is a tricky format, so marker will not always work perfectly. Here are some known limitations that are on the roadmap to address:
- Marker will only convert block equations
- Tables are not always formatted 100% correctly - multiline cells are sometimes split into multiple rows.
- Forms are not converted optimally
- Very complex layouts, with nested tables and forms, may not work
# Installation
You'll need Python 3.10+ and PyTorch. If you're not on a Mac or a machine with a GPU, you may need to install the CPU version of torch first. See [here](https://pytorch.org/get-started/locally/) for more details.
Install with:
```shell
pip install marker-pdf
```
# Usage
First, some configuration:
- Your torch device will be automatically detected, but you can override this. For example, `TORCH_DEVICE=cuda`.
- Some PDFs, even digital ones, have bad text in them. Set the `force_ocr` flag on the CLI or via configuration to ensure your PDF runs through OCR.
## Interactive App
I've included a Streamlit app that lets you try marker interactively with some basic options. Run it with:
```shell
pip install streamlit
marker_gui
```
## Convert a single file
```shell
marker_single /path/to/file.pdf
```
Options:
- `--output_dir PATH`: Directory where output files will be saved. Defaults to the value specified in `settings.OUTPUT_DIR`.
- `--debug`: Enable debug mode for additional logging and diagnostic information.
- `--output_format [markdown|json|html]`: Specify the format for the output results.
- `--page_range TEXT`: Specify which pages to process. Accepts comma-separated page numbers and ranges. Example: `--page_range "0,5-10,20"` will process pages 0, 5 through 10, and page 20.
- `--force_ocr`: Force OCR processing on the entire document, even for pages that might contain extractable text.
- `--processors TEXT`: Override the default processors by providing their full module paths, separated by commas. Example: `--processors "module1.processor1,module2.processor2"`
- `--config_json PATH`: Path to a JSON configuration file containing additional settings.
- `--languages TEXT`: Optionally specify which languages to use for OCR processing. Accepts a comma-separated list. Example: `--languages "eng,fra,deu"` for English, French, and German.
- `config --help`: List all available builders, processors, and converters, and their associated configuration. These values can be used to build a JSON configuration file for additional tweaking of marker defaults.
The list of supported languages for surya OCR is [here](https://github.com/VikParuchuri/surya/blob/master/surya/languages.py). If you don't need OCR, marker can work with any language.
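The `--page_range` syntax above expands in the obvious way. Here is a minimal sketch of that expansion, assuming zero-indexed, inclusive ranges as described; `parse_page_range` is a hypothetical helper for illustration, not part of marker's API:

```python
def parse_page_range(spec: str) -> list[int]:
    """Expand a range string like "0,5-10,20" into a list of page numbers.

    Hypothetical helper illustrating the --page_range syntax; marker does
    its own parsing internally.
    """
    pages = []
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            # An inclusive range like "5-10"
            start, end = part.split("-")
            pages.extend(range(int(start), int(end) + 1))
        else:
            # A single page number like "20"
            pages.append(int(part))
    return pages
```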
## Convert multiple files
```shell
marker /path/to/input/folder --workers 4
```
- `marker` supports all the same options from `marker_single` above.
- `--workers` is the number of conversion workers to run simultaneously. This is set to 5 by default, but you can increase it to improve throughput at the cost of more CPU/GPU usage. Marker will use about 5GB of VRAM per worker at peak, and 3.5GB on average.
## Convert multiple files on multiple GPUs
```shell
NUM_DEVICES=4 NUM_WORKERS=15 marker_chunk_convert ../pdf_in ../md_out
```
- `NUM_DEVICES` is the number of GPUs to use. Should be `2` or greater.
- `NUM_WORKERS` is the number of parallel processes to run on each GPU.
## Use from python
See the `PdfConverter` class at `marker/converters/pdf.py` for additional arguments that can be passed.
```python
from marker.converters.pdf import PdfConverter
from marker.models import create_model_dict
from marker.output import text_from_rendered
converter = PdfConverter(
artifact_dict=create_model_dict(),
)
rendered = converter("FILEPATH")
text, _, images = text_from_rendered(rendered)
```
`rendered` will be a pydantic BaseModel with different properties depending on the requested output type. With markdown output (the default), you'll have the properties `markdown`, `metadata`, and `images`. For JSON output, you'll have `children`, `block_type`, and `metadata`.
### Custom configuration
You can also pass configuration using the `ConfigParser`:
```python
from marker.converters.pdf import PdfConverter
from marker.models import create_model_dict
from marker.config.parser import ConfigParser
config = {
"output_format": "json",
"ADDITIONAL_KEY": "VALUE"
}
config_parser = ConfigParser(config)
converter = PdfConverter(
config=config_parser.generate_config_dict(),
artifact_dict=create_model_dict(),
processor_list=config_parser.get_processors(),
renderer=config_parser.get_renderer()
)
rendered = converter("FILEPATH")
```
# Output Formats
## Markdown
Markdown output will include:
- Image links (images are saved in the same folder)
- Formatted tables
- Embedded LaTeX equations (fenced with `$$`)
- Code fenced with triple backticks
- Superscripts for footnotes
## HTML
HTML output is similar to markdown output:
- Images are included via `img` tags
- Equations are fenced with `<math>` tags
- Code is in `pre` tags
## JSON
JSON output will be organized in a tree-like structure, with the leaf nodes being blocks. Examples of leaf nodes are a single list item, a paragraph of text, or an image.
The output will be a list, with each list item representing a page. Each page is considered a block in the internal marker schema. There are different types of blocks to represent different elements.
Pages have the keys:
- `id` - unique id for the block.
- `block_type` - the type of block. The possible block types can be seen in `marker/schema/__init__.py`. As of this writing, they are ["Line", "Span", "FigureGroup", "TableGroup", "ListGroup", "PictureGroup", "Page", "Caption", "Code", "Figure", "Footnote", "Form", "Equation", "Handwriting", "TextInlineMath", "ListItem", "PageFooter", "PageHeader", "Picture", "SectionHeader", "Table", "Text", "TableOfContents", "Document"]
- `html` - the HTML for the page. Note that this will have recursive references to children. The `content-ref` tags must be replaced with the child content if you want the full HTML. You can see an example of this at `marker/renderers/__init__.py:BaseRender.extract_block_html`.
- `polygon` - the 4-corner polygon of the page, in (x1,y1), (x2,y2), (x3, y3), (x4, y4) format. (x1,y1) is the top left, and coordinates go clockwise.
- `children` - the child blocks.
The child blocks have two additional keys:
- `section_hierarchy` - indicates the sections that the block is part of. `1` indicates an h1 tag, `2` an h2, and so on.
- `images` - base64 encoded images. The key will be the block id, and the data will be the encoded image.
Note that child blocks of pages can have their own children as well (a tree structure).
```json
{
"id": "/page/10/Page/366",
"block_type": "Page",
"html": "<content-ref src='/page/10/SectionHeader/0'></content-ref><content-ref src='/page/10/SectionHeader/1'></content-ref><content-ref src='/page/10/Text/2'></content-ref><content-ref src='/page/10/Text/3'></content-ref><content-ref src='/page/10/Figure/4'></content-ref><content-ref src='/page/10/SectionHeader/5'></content-ref><content-ref src='/page/10/SectionHeader/6'></content-ref><content-ref src='/page/10/TextInlineMath/7'></content-ref><content-ref src='/page/10/TextInlineMath/8'></content-ref><content-ref src='/page/10/Table/9'></content-ref><content-ref src='/page/10/SectionHeader/10'></content-ref><content-ref src='/page/10/Text/11'></content-ref>",
"polygon": [[0.0, 0.0], [612.0, 0.0], [612.0, 792.0], [0.0, 792.0]],
"children": [
{
"id": "/page/10/SectionHeader/0",
"block_type": "SectionHeader",
"html": "<h1>Supplementary Material for <i>Subspace Adversarial Training</i> </h1>",
"polygon": [
[217.845703125, 80.630859375], [374.73046875, 80.630859375],
[374.73046875, 107.0],
[217.845703125, 107.0]
],
"children": null,
"section_hierarchy": {
"1": "/page/10/SectionHeader/1"
},
"images": {}
},
...
]
}
```
## Metadata
All output formats will return a metadata dictionary, with the following fields:
```json
{
"table_of_contents": [
{
"title": "Introduction",
"heading_level": 1,
"page_id": 0,
"polygon": [...]
}
], // computed PDF table of contents
"page_stats": [
{
"page_id": 0,
"text_extraction_method": "pdftext",
"block_counts": [("Span", 200), ...]
},
...
]
}
```
# Internals
Marker is easy to extend. The core units of marker are:
- `Providers`, at `marker/providers`. These provide information from a source file, like a PDF.
- `Builders`, at `marker/builders`. These generate the initial document blocks and fill in text, using info from the providers.
- `Processors`, at `marker/processors`. These process specific blocks, for example the table formatter is a processor.
- `Renderers`, at `marker/renderers`. These use the blocks to render output.
- `Schema`, at `marker/schema`. The classes for all the block types.
- `Converters`, at `marker/converters`. They run the whole end to end pipeline.
To customize processing behavior, override the `processors`. To add new output formats, write a new `renderer`. For additional input formats, write a new `provider`.
Processors and renderers can be directly passed into the base `PDFConverter`, so you can specify your own custom processing easily.
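To make the processor idea concrete, here is a toy, self-contained sketch of the shape a custom processor might take. This is schematic only: marker's real `Processor` base class lives in `marker/processors` and has its own interface, which you should follow when writing a real one:

```python
# Schematic only: not marker's actual Processor base class.
class UppercaseHeaderProcessor:
    """Toy processor: uppercase the text of SectionHeader blocks."""

    # Which block types this processor operates on.
    block_types = ("SectionHeader",)

    def __call__(self, blocks: list[dict]) -> list[dict]:
        for block in blocks:
            if block["block_type"] in self.block_types:
                block["text"] = block["text"].upper()
        return blocks
```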
## API server
There is a very simple API server you can run like this:
```shell
pip install -U uvicorn fastapi python-multipart
marker_server --port 8001
```
This will start a fastapi server that you can access at `localhost:8001`. You can go to `localhost:8001/docs` to see the endpoint options.
You can send requests like this:
```python
import requests
import json
post_data = {
'filepath': 'FILEPATH',
# Add other params here
}
requests.post("http://localhost:8001/marker", data=json.dumps(post_data)).json()
```
Note that this is not a very robust API, and is only intended for small-scale use. If you want to use this server, but want a more robust conversion option, you can use the hosted [Datalab API](https://www.datalab.to/plans).
# Troubleshooting
There are some settings that you may find useful if things aren't working the way you expect:
- Make sure to set `force_ocr` if you see garbled text - this will re-OCR the document.
- `TORCH_DEVICE` - set this to force marker to use a given torch device for inference.
- If you're getting out of memory errors, decrease worker count. You can also try splitting up long PDFs into multiple files.
## Debugging
Pass the `debug` option to activate debug mode. This will save images of each page with detected layout and text, as well as output a json file with additional bounding box information.
# Benchmarks
Benchmarking PDF extraction quality is hard. I've created a test set by finding books and scientific papers that have a PDF version and a LaTeX source. I convert the LaTeX to text and compare the reference to the output of each text extraction method. It's noisy, but at least directionally correct.
**Speed**
| Method | Average Score | Time per page | Time per document |
|---------|----------------|---------------|------------------|
| marker | 0.625115 | 0.234184 | 21.545 |
**Accuracy**
| Method | thinkpython.pdf | switch_trans.pdf | thinkdsp.pdf | crowd.pdf | thinkos.pdf | multicolcnn.pdf |
|---------|----------------|-----------------|--------------|------------|-------------|----------------|
| marker | 0.720347 | 0.592002 | 0.70468 | 0.515082 | 0.701394 | 0.517184 |
Peak GPU memory usage during the benchmark is `6GB` for marker. Benchmarks were run on an A10.
**Throughput**
Marker takes about 6GB of VRAM on average per task, so you can convert 8 documents in parallel on an A6000.
![Benchmark results](data/images/per_doc.png)
## Running your own benchmarks
You can benchmark the performance of marker on your machine. Install marker manually with:
```shell
git clone https://github.com/VikParuchuri/marker.git
cd marker
poetry install
```
Download the benchmark data [here](https://drive.google.com/file/d/1ZSeWDo2g1y0BRLT7KnbmytV2bjWARWba/view?usp=sharing) and unzip. Then run the overall benchmark like this:
```shell
python benchmarks/overall.py data/pdfs data/references report.json
```
# Thanks
This work would not have been possible without amazing open source models and datasets, including (but not limited to):
- Surya
- Texify
- Pypdfium2/pdfium
- DocLayNet from IBM
Thank you to the authors of these models and datasets for making them available to the community!