# insights-extractor

- **Name:** insights-extractor
- **Version:** 0.1.1
- **Summary:** Efficient PDF analysis, text extraction, preprocessing, and pattern recognition with customizable configurations and utilities.
- **Author:** Trae Moore <trae.dev@gmail.com>
- **Upload time:** 2023-04-14 04:09:46
- **Requires Python:** >=3.7
- **Keywords:** nltk, PyMuPDF, Camelot, OpenCV, Ghostscript, insight-extractor, PDF extraction, PDF data extraction, PDF data classification, keyword extraction
# Extractlib

Extractlib is a Python package that provides tools and utilities for processing and analyzing PDF documents. It can extract text and tables from PDFs, clean and preprocess text data, and analyze content for keywords and patterns. Configuration options let you customize the behavior of each tool, making the package flexible and easy to use in a variety of contexts — whether you need to extract data from PDF documents for analysis, or scan PDF content for specific keywords and patterns.


## Dependency Overview
### Python Dependency Install
This project depends on the following pinned packages:
<pre>
nltk == 3.8.1 
PyMuPDF == 1.21.1 
camelot-py == 0.11.0 
opencv-python == 4.7.0.72 
ghostscript == 0.7
</pre>
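Assuming you install with pip, these pins can be recorded in a `requirements.txt` (a sketch — adjust versions to match your environment):

```
nltk==3.8.1
PyMuPDF==1.21.1
camelot-py==0.11.0
opencv-python==4.7.0.72
ghostscript==0.7
```

and installed with `pip install -r requirements.txt`.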

## Manually install supporting binaries

### camelot dependencies
- https://camelot-py.readthedocs.io/en/master/user/install-deps.html#install-deps

#### Ubuntu
<pre>$ apt install ghostscript python3-tk</pre>

#### MacOS
<pre>$ brew install ghostscript tcl-tk</pre>

#### Windows
- Install Ghostscript: https://ghostscript.com/releases/gsdnld.html
- Install ActiveTcl (provides the Tcl/Tk runtime used by Tkinter): https://platform.activestate.com/activestate/activetcl-8.6/auto-fork?_ga=2.93217438.2024444162.1679060315-1994225326.1678735799
 
# Configuration
The configuration file must be placed in the root of your project and named `extractlib.config.json`.

## Example 'extractlib.config.json' File
<pre>
{
  "std_out_logging": true,
  "supported_file_types": [".pdf"],
  "invalid_content_regexs": ["X{2,}"],
  "stop_words": [
    "na",
    "dependent",
    "address",
    "plans",
    "network",
    "nonnetwork",
    "additional",
    "covered"
  ],
  "keywords": {
    "dental": 5,
    "vision": 5,    
    "life": 5,
    "disability": 5
  },
  "keyword_synonyms": {
    "dental": ["orthodontic", "Endo", "Perio", "Oral"],
    "vision": ["eye", "vision", "lens", "lenses", "contact", "contacts"],
    "life": [
      "accident",
      "critical",
      "illness",
      "accidental",
      "dismember",
      "AD&D"
    ],
    "disability": []
  },
  "word_min_length": 3
}
</pre>
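To illustrate how these settings fit together, here is a minimal, self-contained sketch (not part of extractlib) that loads a config of this shape with the standard library and applies `stop_words`, `word_min_length`, and `keyword_synonyms` to a token list. The filtering and scoring functions are hypothetical approximations of what the package might do internally, written only to show the role of each field:

```python
import json

# A trimmed config with the same shape as extractlib.config.json
config = json.loads("""
{
  "stop_words": ["na", "plans", "covered"],
  "keywords": {"dental": 5, "vision": 5},
  "keyword_synonyms": {"dental": ["orthodontic", "oral"], "vision": ["eye", "lens"]},
  "word_min_length": 3
}
""")

def filter_tokens(tokens, cfg):
    """Drop stop words and any token shorter than word_min_length."""
    stops = set(cfg["stop_words"])
    min_len = cfg["word_min_length"]
    return [t for t in tokens if len(t) >= min_len and t.lower() not in stops]

def score_keywords(tokens, cfg):
    """Weight each keyword hit, counting synonyms toward their keyword."""
    scores = {}
    for keyword, weight in cfg["keywords"].items():
        terms = {keyword, *(s.lower() for s in cfg["keyword_synonyms"].get(keyword, []))}
        scores[keyword] = weight * sum(1 for t in tokens if t.lower() in terms)
    return scores

tokens = filter_tokens(["na", "oral", "exam", "plans", "eye"], config)
print(tokens)                          # 'na' is too short, 'plans' is a stop word
print(score_keywords(tokens, config))  # 'oral' counts toward dental, 'eye' toward vision
```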

# Access config variables
<pre>
import json

from extractlib.settings import config

print(json.dumps(config.config_raw, indent=4))
</pre>
# Example implementation
<pre>
from extractlib.document.process import process_document
import json

def main(file: str):
    result = process_document(file, exclude_pages=[2, 3], use_multithreading=False, split_pages_output_dir='./output', delete_split_pages=False)
    # Save the result to a temporary JSON file
    with open('temp.json', 'w') as f:
        json.dump(result, f, indent=4)


if __name__ == '__main__':

    # get working directory
    import os
    target_dir = os.path.dirname(os.path.abspath(__file__))
    main(f'{target_dir}/_testdata/PDF.pdf')
</pre>

            
