amilib

|Field|Value  |
|-------|----  |
|Name|amilib  |
|Version|0.5.3  |
|Home page|https://github.com/petermr/amilib  |
|Summary|Document and dictionary download, cleaning, management  |
|Upload time|2025-05-06 08:19:21  |
|Author|Peter Murray-Rust  |
|Requires Python|>=3.9, <3.13  |
|License|Apache2  |
|Keywords|text and data mining  |
|Requirements|chardet (~=5.2.0), graphviz (~=0.20.3), lxml (~=5.3.0), networkx (~=3.4.2), numpy (~=2.2.0), pandas (~=2.2.3), pdfminer.six, pdfplumber (~=0.11.4), Pillow (~=11.0.0), PyMuPDF (~=1.24.11), pytest (~=8.3.3), requests (~=2.32.3), selenium (~=4.25.0), setuptools (~=75.8.0), SPARQLWrapper (~=2.0.0), webdriver-manager  |

# amilib

Library to support downloading and analysis of documents, mainly from Open Access repositories, 
published scholarly articles, or authoritative sites such as the UN IPCC or UNFCCC. The current version (2024-12-01) includes its own entry points, but the longer-term plan is to integrate with [docanalysis](https://github.com/petermr/docanalysis) and [pygetpapers](https://github.com/petermr/pygetpapers). That will give a one-stop toolset for downloading Open Access articles/reports in bulk, making them semantic, and analysing them with NLP and AI/ML methods.
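
`amilib` is distributed on PyPI (version 0.5.3 in the metadata above) and requires Python >=3.9 and <3.13, so a typical installation is simply:

```
pip install amilib
```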

# components and tests

`amilib` is written as a set of libraries developed by test-driven development (TDD). 
The main strategy is to tackle a real download/transform/analyse problem as a 
series of tests, and then abstract the tests into the library. 
The tests therefore act as a guide to functionality and as simple how-tos. 
During development the libraries can be accessed through the command line (`argparse`), 
and this is currently the normal approach. 
(However, we plan to move the main entry points for most users to `docanalysis`.)
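
Because the tests double as how-tos, running a named subset of them is often the quickest way to see a feature in action. A minimal sketch, assuming the repository has been cloned and `pytest` (a declared requirement) is installed; the `dict` keyword is purely illustrative:

```
# run the whole test suite from the repository root
pytest

# run only the tests whose names match a keyword, e.g. dictionary-related tests
pytest -k dict
```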

# main sub-libraries

This represents the functionality as of 2024-12-02; there are about 1000 non-trivial methods.

|Module|Function  |
|-------|----  |
|[amilib/ami_args.py](amilib/ami_args.py)|Abstract class for argparse options  |
|ami_bib.py|Bibliographic support  |
|ami_corpus.py|create, normalize, search, transform a corpus  |
|ami_csv.py|CSV utilities  |
|ami_dict.py| Ami Dictionary|
|ami_graph.py| (stub)  |
|ami_html.py|large collection for manipulating HTML  |
|ami_integrate.py| miscellaneous conversions |
|ami_nlp.py| (stubs) |
|ami_pdf_libs.py|large PDF-to-HTML converter, includes pdfplumber |
|ami_svg.py| (stub) mainly convenience routines |
|ami_util.py| low-level numeric/geometric utilities |
|amidriver.py| headless browser |
|amix.py|  entry point for amilib main |
|bbox.py| bounding box manipulation |
|constants.py| (stub) |
|dict_args.py| dictionary options in argparse |
|file_lib.py| file utilities |
|headless_lib.py|  messy utility routines (may be obsolete)|
|html_args.py| HTML options in argparse |
|html_extra.py| (stub) possibly obsolete |
|html_generator.py| messy, possibly obsolete |
|html_marker.py| mark up HTML (messy) |
|pdf_args.py| PDF options in argparse |
|search_args.py| search options in argparse |
|util.py| many scattered utilities |
|wikimedia.py| download and conversion of Wikimedia|
|xml_lib.py| convenience routines (lxml wrappers) |


# commands and subcommands

`amilib` has an `argparse` command set and four sub-command sets. These exercise most of the functionality and are used by the community for many purposes. However, it makes more sense to move some of the entry points to `docanalysis`, and this will happen gradually.

## top-level command

This is really only a placeholder.

```
amilib --help
usage: amilib [-h] [-v] {DICT,PDF,HTML,SEARCH} ...

pyamihtml: create, manipulate, use CProject 
----------------------------------------

amilib is a set of problem-independent methods to support document retrieval and analysis
The subcommands:

  DICT <options>      # create and edit Ami Dictionaries
  HTML <options>      # create/edit HTML
  PDF <options>       # convert PDF into HTML and images
  SEARCH <options>    # search and index documents

After installation, run 
  amilib <subcommand> <options>

Examples (# foo is a comment):
  amilib        # runs help
  amilib -h     # runs help
  amilib PDF -h # runs PDF help
  amilib PDF --infile foo.pdf --outdir bar/ # converts PDF to HTML

----------------------------------------

positional arguments:
  {DICT,PDF,HTML,SEARCH}
                        subcommands

options:
  -h, --help            show this help message and exit
  -v, --version         show version 0.3.0

run:
        pyamihtmlx <subcommand> <args>
          where subcommand is in   {DICT, HTML,PDF, SEARCH} and args depend on subcommand
        
```


### `amilib DICT`

`amilib DICT` is used to make and maintain dictionaries.
```
amilib DICT --help

usage: amilib DICT [-h] [--description {wikipedia,wiktionary,wikidata} [{wikipedia,wiktionary,wikidata} ...]] [--dict DICT]
                   [--inpath INPATH [INPATH ...]] [--figures [{None,wikipedia,wikidata} ...]] [--operation {create,edit,markup,validate}]
                   [--outpath OUTPATH [OUTPATH ...]] [--synonym SYNONYM [SYNONYM ...]] [--title TITLE] [--validate]
                   [--wikidata [WIKIDATA ...]] [--wikipedia [WIKIPEDIA ...]] [--wiktionary [WIKTIONARY ...]] [--words [WORDS ...]]

AMI dictionary creation, validation, editing

options:
  -h, --help            show this help message and exit
  --description {wikipedia,wiktionary,wikidata} [{wikipedia,wiktionary,wikidata} ...]
                        add extended description tp dict from one or more of these
  --dict DICT           path for dictionary (existing = edit; new = create (type depends on suffix *.xml or *.html)
  --inpath INPATH [INPATH ...]
                        path for input file(s)
  --figures [{None,wikipedia,wikidata} ...]
                        sources for figures: 'wikipedia' uses infobox or first thumbnail, wikidata uses first figure
  --operation {create,edit,markup,validate}
                        operation: 'create' needs 'words' 'edit' needs 'inpath' 'markup' need 'inpath' and 'outpath` (move to search?)
                        'validate' requires 'inpath' default = 'create
  --outpath OUTPATH [OUTPATH ...]
                        output file
  --synonym SYNONYM [SYNONYM ...]
                        add sysnonyms (from Wikidata) for terms (NYI)
  --title TITLE         internal title for dictionary, normally same as stem of dictionary file
  --validate            validate dictionary; DEPRECATED use '--operation validate'
  --wikidata [WIKIDATA ...]
                        DEPRECATED use --description wikidata add WikidataIDs (NYI)
  --wikipedia [WIKIPEDIA ...]
                        add Wikipedia link/s; DEPRECATED use '--description wikipedia'
  --wiktionary [WIKTIONARY ...]
                        add Wiktionary output as html (may be messy); DEPRECATED use '--description wiktionary'
  --words [WORDS ...]   path/file with words or list of words to create dictionaray

Examples: DICT --words wordsfile --dict dictfile --description wikipedia # creates dictionary from wordsfile and adds wikipedia info
```
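
A hedged example, expanding the one in the help text above (`terms.txt` and `terms.html` are placeholder file names, not files shipped with `amilib`):

```
# create an HTML dictionary from a word list and add Wikipedia descriptions
amilib DICT --words terms.txt --dict terms.html --description wikipedia
```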

### `amilib HTML`

Creates, manages, and transforms HTML.

```
amilib HTML --help
INFO amix.py:546:***** amilib VERSION 0.3.0 *****
INFO:amilib.amix:***** amilib VERSION 0.3.0 *****
INFO amix.py:170:command: ['HTML', '--help']
INFO:amilib.amix:command: ['HTML', '--help']
usage: amilib HTML [-h] [--annotate] [--color COLOR] [--dict DICT] [--inpath INPATH] [--outpath OUTPATH] [--outdir OUTDIR]

HTML editing, analysing annotation

options:
  -h, --help         show this help message and exit
  --annotate         annotate HTML file with dictionary
  --color COLOR      colour for annotation
  --dict DICT        dictionary for annotation
  --inpath INPATH    input html file
  --outpath OUTPATH  output html file
  --outdir OUTDIR    output directory
```
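
A hedged example of annotation using only the options shown above (file names are placeholders):

```
# mark up dictionary terms in an HTML file
amilib HTML --annotate --dict terms.html --inpath chapter.html --outpath chapter_annotated.html
```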

### `amilib PDF`

Converts PDF to structured HTML (heuristic).

```
amilib PDF --help
INFO amix.py:546:***** amilib VERSION 0.3.0 *****
INFO:amilib.amix:***** amilib VERSION 0.3.0 *****
INFO amix.py:170:command: ['PDF', '--help']
INFO:amilib.amix:command: ['PDF', '--help']
usage: amilib PDF [-h] [--debug {words,lines,rects,curves,images,tables,hyperlinks,texts,annots}] [--flow FLOW] [--footer FOOTER]
                  [--header HEADER] [--imagedir IMAGEDIR] [--indir INDIR] [--inform INFORM [INFORM ...]] [--inpath INPATH]
                  [--infile INFILE] [--instem INSTEM] [--maxpage MAXPAGE] [--offset OFFSET] [--outdir OUTDIR] [--outpath OUTPATH]
                  [--outstem OUTSTEM] [--outform OUTFORM] [--pdf2html {pdfminer,pdfplumber}] [--pages PAGES [PAGES ...]]
                  [--resolution RESOLUTION] [--template TEMPLATE]

PDF tools. 
----------
Typically reads one or more PDF files and converts to HTML
can clip parts of page, select page ranges, etc.

Examples:
  * PDF --help

options:
  -h, --help            show this help message and exit
  --debug {words,lines,rects,curves,images,tables,hyperlinks,texts,annots}
                        debug these during parsing (NYI)
  --flow FLOW           create flowing HTML, e.g. join lines, pages (heuristics)
  --footer FOOTER       bottom margin (clip everythimg above)
  --header HEADER       top margin (clip everything below
  --imagedir IMAGEDIR   output images to imagedir
  --indir INDIR         input directory (might be calculated from inpath)
  --inform INFORM [INFORM ...]
                        input formats (might be calculated from inpath)
  --inpath INPATH       input file or (NYI) url; might be calculated from dir/stem/form
  --infile INFILE       input file (synonym for inpath)
  --instem INSTEM       input stem (e.g. 'fulltext'); maybe calculated from 'inpath`
  --maxpage MAXPAGE     maximum number of pages (will be deprecated, use 'pages')
  --offset OFFSET       number of pages before numbers page 1, default=0
  --outdir OUTDIR       output directory
  --outpath OUTPATH     output path (can be calculated from dir/stem/form)
  --outstem OUTSTEM     output stem
  --outform OUTFORM     output format
  --pdf2html {pdfminer,pdfplumber}
                        convert PDF to html
  --pages PAGES [PAGES ...]
                        reads '_2 4_6 8 11_' as 1-2, 4-6, 8, 11-end ; all ranges inclusive (not yet debugged)
  --resolution RESOLUTION
                        resolution of output images (if imagedir)
  --template TEMPLATE   file to parse specific type of document (NYI)

```
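
A hedged example using documented options (`report.pdf` and `report_html/` are placeholders; the `--pages` syntax follows the help text above, which notes it is not yet fully debugged):

```
# convert the first five pages of a PDF to HTML using pdfplumber
amilib PDF --inpath report.pdf --outdir report_html/ --pdf2html pdfplumber --pages _5
```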

### `amilib SEARCH`

Searches and annotates HTML documents.

```
amilib SEARCH --help
usage: amilib SEARCH [-h] [--debug DEBUG] [--dict DICT] [--inpath INPATH [INPATH ...]]
                     [--operation {annotate,index,no_input_styles} [{annotate,index,no_input_styles} ...]]
                     [--outpath OUTPATH [OUTPATH ...]] [--title TITLE]

SEARCH tools. 
----------
Search documents and corpora and make indexes and maybe knowledge graphs.Not yet finished.

Examples:
  * SEARCH --help

options:
  -h, --help            show this help message and exit
  --debug DEBUG         debug these during parsing (NYI)
  --dict DICT           path for dictionary *.xml or *.html)
  --inpath INPATH [INPATH ...]
                        path for input file(s)
  --operation {annotate,index,no_input_styles} [{annotate,index,no_input_styles} ...]
                        operation: 'no_input_styles' needs 'inpath ; remove styles from inpath 'annotate' needs 'inpath and dict';
                        annotates words/phrases 'index' needs 'inpath' optionally outpath (NYI) default = annotate
  --outpath OUTPATH [OUTPATH ...]
                        output file
  --title TITLE         internal title for dictionary, normally same as stem of dictionary file

```
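
A hedged example using the default `annotate` operation described above (file names are placeholders):

```
# annotate words/phrases in an HTML file using a dictionary
amilib SEARCH --operation annotate --dict terms.html --inpath chapter.html --outpath chapter_annotated.html
```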



            
