# Unified Document Analysis
A thin wrapper providing a single entry point for all document analysis frameworks with automatic routing based on file type.
## Overview
This package provides a unified interface for analyzing any document type by automatically routing files to the appropriate specialized framework:
- **xml-analysis-framework** - For XML files (S1000D, DITA, etc.)
- **docling-analysis-framework** - For PDFs, Office documents, and images
- **document-analysis-framework** - For code, text, and configuration files
- **data-analysis-framework** - For CSV, Parquet, databases, and data files
## Why Use This?
Instead of learning and managing 4 different frameworks, you get:
- **Single API** - One `analyze()` and `chunk()` function for everything
- **Automatic routing** - File type detection happens automatically
- **Lazy loading** - Only loads frameworks when needed
- **Minimal dependencies** - Install only what you need via optional dependencies
- **Helpful errors** - Clear messages when frameworks are missing
## Installation
Install the base package:
```bash
pip install unified-document-analysis
```
Then install the frameworks you need:
```bash
# Individual frameworks
pip install unified-document-analysis[xml]
pip install unified-document-analysis[docling]
pip install unified-document-analysis[document]
pip install unified-document-analysis[data]
# Lightweight set (xml + document, no heavy dependencies)
pip install unified-document-analysis[lightweight]
# Everything
pip install unified-document-analysis[all]
```
## Quick Start
### Basic Usage
```python
from unified_document_analysis import analyze, chunk
# Analyze any file - automatically detects type and routes to correct framework
result = analyze('technical_manual.xml')
result = analyze('data.csv')
result = analyze('config.py')
result = analyze('report.pdf')

# Chunk a file using the analysis result for that same file
chunks = chunk('report.pdf', result, strategy="semantic")
```
### Using the UnifiedAnalyzer Class
```python
from unified_document_analysis import UnifiedAnalyzer
analyzer = UnifiedAnalyzer()
# Check what's installed
print(analyzer.get_available_frameworks())
# Output: ['xml', 'document', 'data']
# Detect framework for a file (without analyzing)
info = analyzer.detect_framework_for_file('document.json')
print(f"Framework: {info['framework']}, Confidence: {info['confidence']}")
# Analyze with optional framework hint (for ambiguous files)
result = analyzer.analyze('ambiguous.json', framework_hint='data')
# Get framework information
info = analyzer.get_framework_info('xml')
print(f"Installed: {info['installed']}")
print(f"Extensions: {info['extensions']}")
```
### Advanced Features
```python
from unified_document_analysis import (
UnifiedAnalyzer,
get_available_frameworks,
get_supported_extensions,
detect_framework_for_file
)
# Check what frameworks are installed
frameworks = get_available_frameworks()
print(f"Available: {frameworks}")
# Get all supported extensions
all_exts = get_supported_extensions()
print(f"XML extensions: {all_exts['xml']}")
print(f"Docling extensions: {all_exts['docling']}")
# Detect framework for a file
info = detect_framework_for_file('document.yaml')
if info['is_ambiguous']:
print(f"Ambiguous file. Alternatives: {info['alternatives']}")
```
## Framework Routing Table
| File Type | Extensions | Framework Used |
|-----------|-----------|----------------|
| **XML** | `.xml` | xml-analysis-framework |
| **PDFs** | `.pdf` | docling-analysis-framework |
| **Office** | `.docx`, `.pptx`, `.xlsx` | docling-analysis-framework |
| **Images** | `.png`, `.jpg`, `.jpeg`, `.tiff`, `.bmp` | docling-analysis-framework |
| **Data** | `.csv`, `.parquet`, `.db`, `.sqlite` | data-analysis-framework |
| **Code** | `.py`, `.js`, `.ts`, `.java`, `.c`, etc. | document-analysis-framework |
| **Text** | `.md`, `.txt`, `.rst`, `.tex` | document-analysis-framework |
| **Config** | `.json`, `.yaml`, `.toml`, `.ini` | document-analysis-framework |
### Ambiguous File Types
Some extensions could belong to multiple frameworks. By default:
- `.json` → document-analysis-framework (confidence: 0.7)
- `.yaml`/`.yml` → document-analysis-framework (confidence: 0.7)
Use `framework_hint` to override:
```python
# Treat JSON as data instead of document
result = analyze('data.json', framework_hint='data')
```
## API Reference
### Main Functions
#### `analyze(file_path, framework_hint=None, **kwargs)`
Analyze any supported file type.
**Args:**
- `file_path` (str): Path to file to analyze
- `framework_hint` (str, optional): Force specific framework ('xml', 'docling', 'document', 'data')
- `**kwargs`: Additional arguments passed to framework's analyze method
**Returns:** Analysis result from appropriate framework
**Raises:**
- `FrameworkNotInstalledError`: If required framework is not installed
- `UnsupportedFileTypeError`: If file type is not supported
- `AnalysisError`: If analysis fails
#### `chunk(file_path, analysis_result, strategy="auto", framework_hint=None, **kwargs)`
Chunk a file based on its analysis result.
**Args:**
- `file_path` (str): Path to file to chunk
- `analysis_result`: Analysis result from `analyze()`
- `strategy` (str): Chunking strategy (framework-specific)
- `framework_hint` (str, optional): Force specific framework
- `**kwargs`: Additional arguments passed to framework's chunk method
**Returns:** List of chunks from appropriate framework
#### `get_available_frameworks()`
Get list of installed frameworks.
**Returns:** List of framework names (e.g., `['xml', 'document']`)
#### `detect_framework_for_file(file_path, hint=None)`
Detect which framework would be used for a file.
**Returns:** Dictionary with:
- `framework`: Detected framework name
- `confidence`: Confidence score (0.0-1.0)
- `is_ambiguous`: Whether file type is ambiguous
- `alternatives`: List of alternative frameworks for ambiguous types
- `installed`: Whether framework is installed
### UnifiedAnalyzer Class
#### Methods
- `analyze(file_path, framework_hint=None, **kwargs)` - Analyze a file
- `chunk(file_path, analysis_result, strategy="auto", **kwargs)` - Chunk a file
- `get_available_frameworks()` - Get installed frameworks
- `get_framework_info(framework_name)` - Get framework details
- `detect_framework_for_file(file_path, hint=None)` - Detect framework
- `get_supported_extensions(framework=None)` - Get supported extensions
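For example, the `framework` filter on `get_supported_extensions` narrows the result to a single backend. A minimal sketch of using the methods above; the exact shape of the single-framework return value is an assumption:

```python
from unified_document_analysis import UnifiedAnalyzer

analyzer = UnifiedAnalyzer()

# All supported extensions, grouped by framework
all_exts = analyzer.get_supported_extensions()

# Extensions handled by a single framework (return shape assumed here)
docling_exts = analyzer.get_supported_extensions(framework='docling')
print(f"Docling handles: {docling_exts}")
```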
## Error Handling
The package provides helpful error messages:
### Framework Not Installed
```python
# FrameworkNotInstalledError is assumed importable from the package top level
from unified_document_analysis import analyze, FrameworkNotInstalledError

# If you try to analyze a PDF but the docling framework is not installed
try:
    result = analyze('document.pdf')
except FrameworkNotInstalledError as e:
print(e)
# Output:
# The 'docling' framework is required to process 'document.pdf'
# but is not installed.
#
# Install it with:
# pip install unified-document-analysis[docling]
#
# Or install all frameworks:
# pip install unified-document-analysis[all]
```
### Unsupported File Type
```python
# UnsupportedFileTypeError is assumed importable from the package top level
from unified_document_analysis import analyze, UnsupportedFileTypeError

try:
    result = analyze('document.unknown')
except UnsupportedFileTypeError as e:
print(e)
# Provides list of all supported file types
```
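In a batch pipeline, catching both exceptions lets one uninstalled or unsupported format skip a file instead of aborting the run. A minimal sketch, again assuming the exception classes are exported at the package top level:

```python
from unified_document_analysis import (
    analyze,
    FrameworkNotInstalledError,   # assumed top-level export
    UnsupportedFileTypeError,     # assumed top-level export
)

def analyze_many(paths):
    """Analyze what we can; record what we had to skip."""
    results, skipped = {}, []
    for path in paths:
        try:
            results[path] = analyze(path)
        except FrameworkNotInstalledError as e:
            # The message tells you which optional extra to install
            skipped.append((path, str(e)))
        except UnsupportedFileTypeError:
            skipped.append((path, "unsupported file type"))
    return results, skipped
```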
## When to Use What
### Use `unified-document-analysis` when:
- You're building an application that needs to handle multiple file types
- You want a simple API that "just works"
- You want to minimize dependencies by installing only needed frameworks
- You're prototyping and want quick file analysis
### Use individual frameworks when:
- You only need one framework (e.g., only XML files)
- You need advanced framework-specific features
- You want maximum control over configuration
## How It Works
### Lazy Loading
Frameworks are only imported when needed:
```python
# Only imports base package (lightweight)
from unified_document_analysis import analyze
# Framework only loaded when analyze() is called
result = analyze('document.pdf') # Now docling is imported
```
### Smart Routing
The router examines the file extension and routes to the appropriate framework (see the sketch after these steps):
1. Get file extension
2. Look up extension in routing table
3. Dynamically import framework module
4. Call framework's analyze/chunk methods
5. Return results
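A simplified sketch of these steps using only the standard library. The routing table, module names, and the assumption that each framework exposes a top-level `analyze()` are illustrative, not the package's actual internals:

```python
import importlib
from pathlib import Path

# Illustrative routing table; the real package ships its own mapping
ROUTES = {
    '.xml': 'xml_analysis_framework',
    '.pdf': 'docling_analysis_framework',
    '.csv': 'data_analysis_framework',
    '.py': 'document_analysis_framework',
}

def route_and_analyze(file_path, **kwargs):
    ext = Path(file_path).suffix.lower()              # 1. get file extension
    module_name = ROUTES.get(ext)                     # 2. look up routing table
    if module_name is None:
        raise ValueError(f"Unsupported file type: {ext!r}")
    framework = importlib.import_module(module_name)  # 3. lazy dynamic import
    return framework.analyze(file_path, **kwargs)     # 4./5. delegate and return
```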
### Optional Dependencies
```toml
[project.optional-dependencies]
xml = ["xml-analysis-framework>=2.0.0"]
docling = ["docling-analysis-framework>=2.0.0"]
document = ["document-analysis-framework>=2.0.0"]
data = ["data-analysis-framework>=2.0.0"]
```
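Because each backend is an optional extra, the wrapper can only offer a framework whose distribution is actually importable. A minimal sketch of such an availability check; the import name below is inferred from the distribution name and is an assumption:

```python
import importlib.util

def framework_installed(module_name: str) -> bool:
    """True if the optional framework's package can be imported."""
    return importlib.util.find_spec(module_name) is not None

# Installed via `pip install unified-document-analysis[docling]`?
# (import name assumed from the distribution name)
print(framework_installed('docling_analysis_framework'))
```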
## Examples
### Multi-Format Document Pipeline
```python
from unified_document_analysis import analyze, chunk
def process_documents(file_paths):
"""Process multiple document types in a single pipeline."""
results = []
for path in file_paths:
# Analyze (auto-routes to correct framework)
analysis = analyze(path)
# Chunk (uses same framework)
chunks = chunk(path, analysis, strategy="semantic")
results.append({
'path': path,
'analysis': analysis,
'chunks': chunks
})
return results
# Works with any mix of file types
files = [
'manual.xml',
'report.pdf',
'data.csv',
'config.py'
]
results = process_documents(files)
```
### Check Before Processing
```python
from unified_document_analysis import detect_framework_for_file

def can_process_file(file_path):
    """Check if a file can be processed with installed frameworks."""
    info = detect_framework_for_file(file_path)
if not info['installed']:
print(f"Cannot process {file_path}: {info['framework']} framework not installed")
return False
if info['is_ambiguous']:
print(f"Ambiguous file type. Will use {info['framework']} framework.")
print(f"Alternatives: {info['alternatives']}")
return True
# Check before processing
if can_process_file('document.json'):
result = analyze('document.json')
```
### Custom Framework Selection
```python
from unified_document_analysis import UnifiedAnalyzer
analyzer = UnifiedAnalyzer()
# Override automatic detection for JSON data files
json_files = ['data1.json', 'data2.json']
for file_path in json_files:
# Force data framework instead of document framework
result = analyzer.analyze(file_path, framework_hint='data')
print(f"Processed {file_path} as data")
```
## Contributing
Contributions welcome! Please submit issues and pull requests on GitHub.
Repository: https://github.com/rdwj/unified-document-analysis
## License
Apache License 2.0
## Related Projects
- [analysis-framework-base](https://github.com/rdwj/analysis-framework-base) - Base interfaces and types
- [xml-analysis-framework](https://github.com/rdwj/xml-analysis-framework) - XML document analysis
- [docling-analysis-framework](https://github.com/rdwj/docling-analysis-framework) - PDF/Office analysis
- [document-analysis-framework](https://github.com/rdwj/document-analysis-framework) - Code/text analysis
- [data-analysis-framework](https://github.com/rdwj/data-analysis-framework) - Data file analysis
## Support
For issues, questions, or contributions, please visit:
https://github.com/rdwj/unified-document-analysis/issues