| Field | Value |
|---|---|
| Name | pdf2md-llm |
| Version | 0.1.2 |
| Summary | A package to convert PDF files to Markdown using a local LLM. |
| Author | Leon Eversberg |
| Maintainer | None |
| Home page | None |
| Docs URL | None |
| Requires Python | >=3.10 |
| Upload time | 2025-03-07 08:37:50 |
| License | MIT License. Copyright (c) 2025 Leon Eversberg. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. |
| Keywords | markdown, pdf, llm, parser, converter |
| VCS | [GitHub](https://github.com/leoneversberg/pdf2md_llm) |
| Bugtrack URL | None |
| Requirements | pdf2img (~=0.1.2), vllm (==0.7.3), qwen-vl-utils (~=0.0.10), accelerate (~=1.4.0), transformers (~=4.49.0), torch (==2.5.1) |
| Travis-CI | No Travis. |
| Coveralls test coverage | No coveralls. |
# pdf2md_llm
`pdf2md_llm` is a Python package that converts PDF files to Markdown using a local Large Language Model (LLM).
The package leverages the `pdf2image` library to convert PDF pages to images and a vision language model to generate Markdown text from these images.
## Features
- Convert PDF files to images.
- Generate Markdown text from images using a local LLM.
- Keep your data private. No third-party file uploads.
## Installation
You need a CUDA-compatible GPU to run local LLMs with vLLM.
You can use `pip` to install the package:
```bash
pip install pdf2md-llm
```
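Since vLLM needs a CUDA-capable GPU here, a quick sanity check with PyTorch (installed as a dependency) can confirm that the GPU is visible before you start a conversion. This snippet is just an optional check, not part of the package:
```python
import torch

# Optional check: pdf2md_llm relies on vLLM, which needs a CUDA-capable GPU.
if torch.cuda.is_available():
    print("CUDA GPU detected:", torch.cuda.get_device_name(0))
else:
    print("No CUDA GPU detected; vLLM will not be able to run the model.")
```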
## Usage
### CLI
You can use the `pdf2md_llm` package via the **command line interface (CLI)**.
To convert a PDF file to Markdown, run the following command:
```bash
pdf2md_llm <pdf_file> [options]
```
#### Options
* `pdf_file`: Path to the PDF file to convert.
* `--model`: Name of the model to use (default: `Qwen/Qwen2.5-VL-3B-Instruct-AWQ`).
* `--dtype`: Data type for the model weights and activations (default: `None`).
* `--max_model_len`: Max model context length (default: `7000`).
* `--size`: Image size as a `(width, height)` tuple; `None` keeps the aspect ratio (default: `(700, None)`).
* `--dpi`: DPI of the images (default: `200`).
* `--fmt`: Image format (default: `jpeg`).
* `--output_folder`: Folder to save the output Markdown file (default: `./out`).
#### Example
```bash
pdf2md_llm example.pdf --model "Qwen/Qwen2.5-VL-3B-Instruct-AWQ" --output_folder "./output"
```
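If the CLI flags follow the option names listed above (an assumption; check `pdf2md_llm --help` for the exact syntax in your installed version), a more fully specified run could look like this:
```bash
pdf2md_llm example.pdf \
  --model "Qwen/Qwen2.5-VL-7B-Instruct-AWQ" \
  --dtype half \
  --max_model_len 7000 \
  --dpi 300 \
  --fmt png \
  --output_folder "./output"
```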
##### Model Support
Currently, the following Qwen2.5-VL models are supported:
* `Qwen/Qwen2.5-VL-3B-Instruct`
* `Qwen/Qwen2.5-VL-3B-Instruct-AWQ`
* `Qwen/Qwen2.5-VL-7B-Instruct`
* `Qwen/Qwen2.5-VL-7B-Instruct-AWQ`
* `Qwen/Qwen2.5-VL-72B-Instruct`
* `Qwen/Qwen2.5-VL-72B-Instruct-AWQ`
If you want to use a different model, you can add any vLLM-compatible model to the factory function `llm_model()` in `llm.py`.
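The internals of `llm_model()` are not shown here, but since the package runs models through vLLM, adding another vision language model presumably comes down to constructing a vLLM `LLM` for it. The sketch below is only an illustration under that assumption; `build_custom_vlm` and the example model name are hypothetical, not part of the package's API:
```python
from vllm import LLM


def build_custom_vlm(model_name: str = "Qwen/Qwen2-VL-7B-Instruct") -> LLM:
    """Hypothetical helper showing what a new branch in llm_model() might construct."""
    return LLM(
        model=model_name,          # any vLLM-compatible vision language model
        dtype="half",
        max_num_seqs=1,
        max_model_len=7000,
        limit_mm_per_prompt={"image": 1},  # one page image per prompt
    )
```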
### Python API
You can use the `pdf2md_llm` package via the **Python API**.
Basic usage:
```python
from vllm import SamplingParams

from pdf2md_llm.llm import llm_model
from pdf2md_llm.pdf2img import PdfToImg

pdf2img = PdfToImg(size=(700, None), output_folder="./out")
img_files = pdf2img.convert("example.pdf")

llm = llm_model(
    model="Qwen/Qwen2.5-VL-3B-Instruct-AWQ",
    dtype="half",
    max_num_seqs=1,
    max_model_len=7000,
)

sampling_params = SamplingParams(
    temperature=0.1,
    min_p=0.1,
    max_tokens=8192,
    stop_token_ids=[],
)

# Append all pages to one output Markdown file
for img_file in img_files:
    markdown_text = llm.generate(
        img_file, sampling_params=sampling_params
    )  # convert image to Markdown with LLM
    with open("example.md", "a", encoding="utf-8") as myfile:
        myfile.write(markdown_text)
```
For a full example, see [example_api.py](./pdf2md_llm/example_api.py).
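Building on the snippet above, and assuming the same `PdfToImg` and `llm_model` behavior, a small batch loop could convert every PDF in a folder and write one Markdown file per document. The `./pdfs` path is just a placeholder:
```python
from pathlib import Path

from vllm import SamplingParams
from pdf2md_llm.llm import llm_model
from pdf2md_llm.pdf2img import PdfToImg

pdf2img = PdfToImg(size=(700, None), output_folder="./out")
llm = llm_model(
    model="Qwen/Qwen2.5-VL-3B-Instruct-AWQ",
    dtype="half",
    max_num_seqs=1,
    max_model_len=7000,
)
sampling_params = SamplingParams(temperature=0.1, min_p=0.1, max_tokens=8192)

for pdf_path in Path("./pdfs").glob("*.pdf"):
    img_files = pdf2img.convert(str(pdf_path))
    # One Markdown file per input PDF, e.g. report.pdf -> report.md
    with open(pdf_path.with_suffix(".md").name, "w", encoding="utf-8") as out_file:
        for img_file in img_files:
            out_file.write(llm.generate(img_file, sampling_params=sampling_params))
```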
## License
This project is licensed under the MIT License. See the LICENSE file for details.
## Acknowledgements
* [pdf2image](https://github.com/Belval/pdf2image) for converting PDF files to images.
* [Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL) for the vision language model.
* [vLLM](https://github.com/vllm-project/vllm) for efficient LLM inference.