# DocParser 📄
DocParser is a powerful tool for LLM training and other applications such as RAG. It supports parsing multiple file types, as listed below.
## Features 🎉
### File types supported for parsing:
- [Pdf](#Pdf): Uses OCR to parse PDF documents and outputs text in Markdown format. The parsing results can be used for LLM pretraining, RAG, etc.
- [Html](#Html): Uses [jina](https://jina.ai/reader) to parse HTML pages and outputs text in Markdown.
## Install
From pip:
```bash
pip install docparser_feb
```
From repository:
```bash
pip install git+https://github.com/feb-co/DocParser.git
```
Or install it from a local clone of the repository:
```bash
git clone https://github.com/feb-co/DocParser.git
cd DocParser
pip install -e .
```
## API / Functionality
### Pdf
#### From CLI
You can run the following script to get the PDF parsing results:
```shell
export LOG_LEVEL="ERROR"
export DOC_PARSER_MODEL_DIR="xxx"
export DOC_PARSER_OPENAI_URL="xxx"
export DOC_PARSER_OPENAI_KEY="xxx"
export DOC_PARSER_OPENAI_MODEL="gpt-4-0125-preview"
export DOC_PARSER_OPENAI_RETRY="3"
docparser-pdf \
--inputs path/to/file.pdf or path/to/directory \
--output_dir output_directory \
--page_range '0:1' --mode 'figure latex' \
--rendering --use_llm --overwrite_result
```
The following is a description of the relevant parameters:
```bash
usage: docparser-pdf [-h] --inputs INPUTS --output_dir OUTPUT_DIR [--page_range PAGE_RANGE] [--mode {plain,figure placehold,figure latex}] [--rendering] [--use_llm] [--overwrite_result]

options:
  -h, --help            show this help message and exit
  --inputs INPUTS       Directory containing the PDFs to parse, or the path to a single PDF file.
  --output_dir OUTPUT_DIR
                        Directory in which to store the output results (md/json/images).
  --page_range PAGE_RANGE
                        The page range to parse, in the format 'start_page:end_page', i.e. [start, end). Default: all pages.
  --mode {plain,figure placehold,figure latex}
                        The parsing mode: extract plain text only, or text plus figures (as placeholders or as LaTeX).
  --rendering           Render the recognition results onto the input PDF to visualize the detected regions. Default: False.
  --use_llm             Use an LLM to format the parsing results. If enabled, configure it via the environment variables DOC_PARSER_OPENAI_URL, DOC_PARSER_OPENAI_KEY, and DOC_PARSER_OPENAI_MODEL. Default: False.
  --overwrite_result    Overwrite existing parsing results for the target file. Default: False.
```
#### From Python
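A Python API is not documented here yet. As a stopgap, the `docparser-pdf` CLI above can be driven from Python through `subprocess`; the sketch below mirrors the CLI example (the input/output paths are placeholders, and it assumes the relevant environment variables have already been exported):

```python
import subprocess
from pathlib import Path

# Minimal sketch: invoke the docparser-pdf CLI from Python.
# The input/output paths below are placeholders; adjust them to your files.
input_pdf = Path("path/to/file.pdf")
output_dir = Path("output_directory")

subprocess.run(
    [
        "docparser-pdf",
        "--inputs", str(input_pdf),
        "--output_dir", str(output_dir),
        "--page_range", "0:1",
        "--mode", "figure latex",
        "--overwrite_result",
    ],
    check=True,  # raise CalledProcessError if parsing fails
)

# The Markdown/JSON/image results are written into output_dir.
for md_file in sorted(output_dir.glob("**/*.md")):
    print(md_file)
```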
### Html
#### From CLI
You can run the following script to get the HTML parsing results:
```bash
docparser-html https://github.com/mem0ai/mem0
```
The command takes the URL(s) of the page(s) to parse as arguments, as shown above.
#### From Python
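As with the PDF parser, a Python API is not documented here; a minimal workaround is to call the `docparser-html` CLI through `subprocess`. The sketch below captures stdout, on the assumption that the parsed Markdown is printed there; verify this against your installed version.

```python
import subprocess

# Minimal sketch: run the docparser-html CLI on a single URL from Python.
url = "https://github.com/mem0ai/mem0"

result = subprocess.run(
    ["docparser-html", url],
    capture_output=True,
    text=True,
    check=True,  # raise CalledProcessError if parsing fails
)

# Assumption: the parsed Markdown is emitted on stdout.
print(result.stdout)
```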