# gptpdf
<p align="center">
<a href="README_CN.md"><img src="https://img.shields.io/badge/文档-中文版-blue.svg" alt="CN doc"></a>
<a href="README.md"><img src="https://img.shields.io/badge/document-English-blue.svg" alt="EN doc"></a>
</p>
Using a vision large language model (VLM, such as GPT-4o) to parse PDFs into Markdown.

Our approach is very simple (only 293 lines of code), yet it parses typography, math formulas, tables, pictures, charts, and more almost perfectly.

Average cost per page: $0.013

This package uses the [GeneralAgent](https://github.com/CosmosShadow/GeneralAgent) library to interact with the OpenAI API.

[gptpdf-ui](https://github.com/daodao97/gptpdf-ui) is a visual tool built on top of gptpdf.
## Process steps
1. Use the PyMuPDF library to parse the PDF, find all non-text areas, and mark them (a minimal sketch of this step follows the list), for example:
![](docs/demo.jpg)
2. Use a large visual model (such as GPT-4o) to parse the marked page images and produce a Markdown file.
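For illustration, here is a minimal sketch of step 1 using PyMuPDF directly. It is not the repository's exact implementation, and the file names are placeholders:

```python
import fitz  # PyMuPDF

doc = fitz.open('example.pdf')  # placeholder input path
page = doc[0]

# Collect bounding boxes of non-text content: embedded images and vector drawings.
rects = []
for img in page.get_images(full=True):
    rects.extend(page.get_image_rects(img[0]))  # img[0] is the image xref
for drawing in page.get_drawings():
    rects.append(drawing['rect'])

# Outline each detected area in red so the VLM can reference it by position.
for rect in rects:
    page.draw_rect(rect, color=(1, 0, 0), width=1)

# Render the annotated page to an image that can be sent to the model.
pix = page.get_pixmap(dpi=150)
pix.save('page_0.png')  # placeholder output path
```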
## DEMO
1. [examples/attention_is_all_you_need/output.md](examples/attention_is_all_you_need/output.md) for PDF [examples/attention_is_all_you_need.pdf](examples/attention_is_all_you_need.pdf).
2. [examples/rh/output.md](examples/rh/output.md) for PDF [examples/rh.pdf](examples/rh.pdf).
## Installation
```bash
pip install gptpdf
```
## Usage
```python
from gptpdf import parse_pdf

api_key = 'Your OpenAI API Key'
pdf_path = 'path/to/your.pdf'  # replace with the PDF you want to parse
content, image_paths = parse_pdf(pdf_path, api_key=api_key)
print(content)
```
See more in [test/test.py](test/test.py)
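Alternatively, since `parse_pdf` falls back to the `OPENAI_API_KEY` environment variable (see the API section below), you can omit `api_key`. A minimal sketch, with a placeholder path:

```python
import os
from gptpdf import parse_pdf

# parse_pdf reads OPENAI_API_KEY from the environment when api_key is not passed.
os.environ['OPENAI_API_KEY'] = 'Your OpenAI API Key'
content, image_paths = parse_pdf('path/to/your.pdf')  # placeholder path
print(content)
```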
## API
### parse_pdf
**Function**:
```python
def parse_pdf(
    pdf_path: str,
    output_dir: str = './',
    prompt: Optional[Dict] = None,
    api_key: Optional[str] = None,
    base_url: Optional[str] = None,
    model: str = 'gpt-4o',
    verbose: bool = False,
    gpt_worker: int = 1
) -> Tuple[str, List[str]]:
```
Parses a PDF file into a Markdown file and returns the Markdown content along with all image paths.
**Parameters**:
- **pdf_path**: *str*
Path to the PDF file
- **output_dir**: *str*, default: './'
Output directory to store all images and the Markdown file
- **api_key**: *Optional[str]*, optional
OpenAI API key. If not provided, the `OPENAI_API_KEY` environment variable will be used.
- **base_url**: *Optional[str]*, optional
OpenAI base URL. If not provided, the `OPENAI_BASE_URL` environment variable will be used. This can be modified to call other large model services with OpenAI API interfaces, such as `GLM-4V`.
- **model**: *str*, default: 'gpt-4o'
  A multimodal large model served through an OpenAI-compatible API. Other options include:
  - [qwen-vl-max](https://help.aliyun.com/zh/dashscope/developer-reference/compatibility-of-openai-with-dashscope)
  - [GLM-4V](https://open.bigmodel.cn/dev/api#glm-4v)
  - [Yi-Vision](https://platform.lingyiwanwu.com/docs)
  - Azure OpenAI (tested): set `base_url` to `https://xxxx.openai.azure.com/`, use your Azure API key as `api_key`, and set `model` to `azure_xxxx`, where `xxxx` is the deployed model name.
- **verbose**: *bool*, default: False
Verbose mode. When enabled, the content parsed by the large model will be displayed in the command line.
- **gpt_worker**: *int*, default: 1
  Number of GPT parsing worker threads. On a capable machine, increase this value to parse pages in parallel and speed things up.
- **prompt**: *dict*, optional
  If the default prompts in this repository do not suit your model and fail to achieve the best results, you can supply custom prompts. The prompts in the repository are divided into three parts:
- `prompt`: Mainly used to guide the model on how to process and convert text content in images.
- `rect_prompt`: Used to handle cases where specific areas (such as tables or images) are marked in the image.
- `role_prompt`: Defines the role of the model to ensure the model understands it is performing a PDF document parsing task.
You can pass custom prompts in the form of a dictionary to replace any of the prompts. Here is an example:
```python
prompt = {
"prompt": "Custom prompt text",
"rect_prompt": "Custom rect prompt",
"role_prompt": "Custom role prompt"
}
content, image_paths = parse_pdf(
pdf_path=pdf_path,
output_dir='./output',
model="gpt-4o",
prompt=prompt,
verbose=False,
)
- **args"": LLM other parameters, such as `temperature`, `top_p`, `max_tokens`, `presence_penalty`, `frequency_penalty`, etc.
## Join Us 👏🏻
Scan the QR code below with WeChat to join our group chat or contribute.
<p align="center">
<img src="./docs/wechat.jpg" alt="wechat" width=400/>
</p>