# PyVisionAI
# Content Extractor and Image Description with Vision LLM
Extract and describe content from documents using Vision Language Models.
## Repository
https://github.com/MDGrey33/pyvisionai
## Requirements
- Python 3.11 or higher
- Operating system: Windows, macOS, or Linux
- Disk space: At least 1GB free space (more if using local Llama model)
## Features
- Extract text and images from PDF, DOCX, PPTX, and HTML files
- Capture interactive HTML pages as images with full rendering
- Describe images using:
- Cloud-based models (OpenAI GPT-4 Vision, Anthropic Claude Vision)
- Local models (Ollama's Llama Vision)
- Save extracted text and image descriptions in markdown format
- Support for both CLI and library usage
- Multiple extraction methods for different use cases
- Detailed logging with timestamps for all operations
- Customizable image description prompts
## Installation
For macOS users, you can install using Homebrew:
```bash
brew tap mdgrey33/pyvisionai
brew install pyvisionai
```
For more details and configuration options, see the [Homebrew tap repository](https://github.com/mdgrey33/homebrew-pyvisionai).
1. **Install System Dependencies**
```bash
# macOS (using Homebrew)
brew install --cask libreoffice # Required for DOCX/PPTX processing
brew install poppler # Required for PDF processing
pip install playwright # Required for HTML processing
playwright install # Install browser dependencies
# Ubuntu/Debian
sudo apt-get update
sudo apt-get install -y libreoffice # Required for DOCX/PPTX processing
sudo apt-get install -y poppler-utils # Required for PDF processing
pip install playwright # Required for HTML processing
playwright install # Install browser dependencies
# Windows
# Download and install:
# - LibreOffice: https://www.libreoffice.org/download/download/
# - Poppler: http://blog.alivate.com.au/poppler-windows/
# Add poppler's bin directory to your system PATH
pip install playwright
playwright install
```
2. **Install PyVisionAI**
```bash
# Using pip
pip install pyvisionai
# Using poetry (will automatically install playwright as a dependency)
poetry add pyvisionai
poetry run playwright install # Install browser dependencies
```
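After installation, you can run a quick sanity check. The commands below only print help text or exit silently and make no API calls:
```bash
# Verify the CLI entry points are on your PATH
file-extract --help
describe-image --help

# Verify the library can be imported
python -c "import pyvisionai"
```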
## Directory Structure
By default, PyVisionAI uses the following directory structure:
```
content/
├── source/ # Default input directory for files to process
├── extracted/ # Default output directory for processed files
└── log/ # Directory for log files and benchmarks
```
These directories are created automatically when needed, but you can:
1. Create them manually:
```bash
mkdir -p content/source content/extracted content/log
```
2. Override them with custom paths:
```bash
# Specify custom input and output directories
file-extract -t pdf -s /path/to/inputs -o /path/to/outputs
# Process a single file with custom output
file-extract -t pdf -s ~/documents/file.pdf -o ~/results
```
Note: While the default directories provide an organized structure, you're free to use any directory layout that suits your needs by specifying custom paths with the `-s` (source) and `-o` (output) options.
## Setup for Image Description
For cloud image description (default, recommended):
```bash
# Set OpenAI API key (for GPT-4 Vision)
export OPENAI_API_KEY='your-openai-key'
# Or set Anthropic API key (for Claude Vision)
export ANTHROPIC_API_KEY='your-anthropic-key'
```
For local image description (optional):
```bash
# Install Ollama
# macOS
brew install ollama
# Linux
curl -fsSL https://ollama.com/install.sh | sh
# Windows
# Download from https://ollama.com/download/windows
# Start Ollama server
ollama serve
# Pull the required model
ollama pull llama3.2-vision
# Verify installation
ollama list # Should show llama3.2-vision
curl http://localhost:11434/api/tags # Should return JSON response
```
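If you prefer to run the same check from Python, here is a minimal sketch; it assumes the default Ollama host shown above and that `/api/tags` returns a JSON object with a `models` list, so adjust as needed:
```python
import json
from urllib.request import urlopen

# Ask the local Ollama server which models are installed
# (same endpoint as the curl check above).
with urlopen("http://localhost:11434/api/tags") as response:
    tags = json.load(response)

names = [model.get("name", "") for model in tags.get("models", [])]
print("llama3.2-vision available:", any(name.startswith("llama3.2-vision") for name in names))
```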
Note: The local Llama model:
- Runs entirely on your machine
- No API key required
- Requires about 8GB of disk space
- Needs 16GB+ RAM for optimal performance
- May be slower than cloud models but offers privacy
## Usage
### Command Line Interface
1. **Extract Content from Files**
```bash
# Process a single file (using default page-as-image method)
file-extract -t pdf -s path/to/file.pdf -o output_dir
file-extract -t docx -s path/to/file.docx -o output_dir
file-extract -t pptx -s path/to/file.pptx -o output_dir
file-extract -t html -s path/to/file.html -o output_dir
# Process with specific model
file-extract -t pdf -s input.pdf -o output_dir -m claude
file-extract -t pdf -s input.pdf -o output_dir -m gpt4
file-extract -t pdf -s input.pdf -o output_dir -m llama
# Process with specific extractor
file-extract -t pdf -s input.pdf -o output_dir -e text_and_images
# Process all files in a directory
file-extract -t pdf -s input_dir -o output_dir
# Example with custom prompt
file-extract -t pdf -s document.pdf -o output_dir -p "Extract the exact text as present in the image and write one sentence about each visual in the image"
```
**Note:** The custom prompt for file extraction affects the content of the output document. For the `page_as_image` method, the prompt should include instructions to both extract text and describe visuals; variations are fine as long as they cover both tasks. Avoid prompts like "What's the color of this picture?" as they may not yield the desired results.
2. **Describe Images**
```bash
# Using GPT-4 Vision (default)
describe-image -i path/to/image.jpg
# Using Claude Vision
describe-image -i path/to/image.jpg -u claude -k your-anthropic-key
# Using local Llama model
describe-image -i path/to/image.jpg -u llama
# Using custom prompt
describe-image -i image.jpg -p "List the main colors in this image"
# Additional options
describe-image -i image.jpg -v # Verbose output
```
### Library Usage
```python
from pyvisionai import (
create_extractor,
describe_image_openai,
describe_image_claude,
describe_image_ollama
)
# 1. Extract content from files
# Using GPT-4 Vision (default)
extractor = create_extractor("pdf")
output_path = extractor.extract("input.pdf", "output_dir")
# Using Claude Vision
extractor = create_extractor("pdf", model="claude")
output_path = extractor.extract("input.pdf", "output_dir")
# Using specific extraction method
extractor = create_extractor("pdf", extractor_type="text_and_images")
output_path = extractor.extract("input.pdf", "output_dir")
# 2. Describe images
# Using GPT-4 Vision
description = describe_image_openai(
"image.jpg",
model="gpt-4o-mini", # default
api_key="your-openai-key", # optional if set in environment
max_tokens=300, # default
prompt="Describe this image focusing on colors and textures" # optional
)
# Using Claude Vision
description = describe_image_claude(
"image.jpg",
api_key="your-anthropic-key", # optional if set in environment
prompt="Describe this image focusing on colors and textures" # optional
)
# Using local Llama model
description = describe_image_ollama(
"image.jpg",
model="llama3.2-vision", # default
prompt="List the main objects in this image" # optional
)
```
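The calls above handle a single file; if you want to drive the library over a whole folder yourself (instead of passing a directory to `file-extract`), a minimal sketch using the default directory layout might look like this:
```python
from pathlib import Path

from pyvisionai import create_extractor

# Reuse one extractor instance for every PDF in the default source directory
extractor = create_extractor("pdf")

for pdf_path in sorted(Path("content/source").glob("*.pdf")):
    try:
        output_path = extractor.extract(str(pdf_path), "content/extracted")
        print(f"Extracted {pdf_path.name} -> {output_path}")
    except Exception as exc:  # keep going if one document fails
        print(f"Failed on {pdf_path.name}: {exc}")
```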
## Logging
The application maintains detailed logs of all operations:
- By default, logs are stored in `content/log/` with timestamp-based filenames
- Each run creates a new log file: `pyvisionai_YYYYMMDD_HHMMSS.log` (see the example after this list for locating the latest one)
- Logs include:
- Timestamp for each operation
- Processing steps and their status
- Error messages and warnings
- Extraction method used
- Input and output file paths
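For example, to review the most recent run (assuming the default `content/log/` location and filename pattern):
```bash
# Show the tail of the newest PyVisionAI log file
ls -t content/log/pyvisionai_*.log | head -n 1 | xargs tail -n 50
```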
## Environment Variables
```bash
# Required for OpenAI Vision (if using GPT-4)
export OPENAI_API_KEY='your-openai-key'
# Required for Claude Vision (if using Claude)
export ANTHROPIC_API_KEY='your-anthropic-key'
# Optional: Ollama host (if using local description)
export OLLAMA_HOST='http://localhost:11434'
```
## Performance Optimization
1. **Memory Management**
- Use `text_and_images` method for large documents
- Process files in smaller batches
- Monitor memory usage during batch processing
- Clean up temporary files regularly
2. **Processing Speed**
- Cloud models (GPT-4, Claude) are generally faster than local models
- Use parallel processing for batch operations
- Consider SSD storage for better I/O performance
- Optimize image sizes before processing
3. **API Usage**
- Implement proper rate limiting
- Use appropriate retry mechanisms (see the sketch after this list)
- Cache results when possible
- Monitor API quotas and usage
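These points are general guidelines rather than built-in features. As one example, the retry and caching suggestions can be sketched with a small wrapper around `describe_image_openai`; the backoff timing and cache size here are illustrative choices, not PyVisionAI defaults:
```python
import time
from functools import lru_cache

from pyvisionai import describe_image_openai


@lru_cache(maxsize=128)
def describe_with_retry(image_path: str, prompt: str, attempts: int = 3) -> str:
    """Describe an image, retrying with exponential backoff and caching repeat calls."""
    for attempt in range(attempts):
        try:
            return describe_image_openai(image_path, prompt=prompt)
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(2 ** attempt)  # wait 1s, then 2s, ... between attempts


description = describe_with_retry("image.jpg", "Describe the main elements in this image")
```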
## License
This project is licensed under the [Apache License 2.0](LICENSE).
## Command Parameters
### `file-extract` Command
```bash
file-extract [-h] -t TYPE -s SOURCE -o OUTPUT [-e EXTRACTOR] [-m MODEL] [-k API_KEY] [-v] [-p PROMPT]
Required Arguments:
-t, --type TYPE File type to process (pdf, docx, pptx, html)
-s, --source SOURCE Source file or directory path
-o, --output OUTPUT Output directory path
Optional Arguments:
-h, --help Show help message and exit
-e, --extractor TYPE Extraction method:
- page_as_image: Convert pages to images (default)
- text_and_images: Extract text and images separately
Note: HTML only supports page_as_image
-m, --model MODEL Vision model for image description:
- gpt4: GPT-4 Vision (default)
- claude: Claude Vision
- llama: Local Llama model
-k, --api-key KEY API key (required for GPT-4 and Claude)
-v, --verbose Enable verbose logging
-p, --prompt TEXT Custom prompt for image description
```
### `describe-image` Command
```bash
describe-image [-h] -i IMAGE [-u MODEL] [-k API_KEY] [-t MAX_TOKENS] [-v] [-p PROMPT]
Required Arguments:
-i, --image IMAGE Path to the image file
Optional Arguments:
-h, --help Show help message and exit
-u, --use-case MODEL Model to use:
- gpt4: GPT-4 Vision (default)
- claude: Claude Vision
- llama: Local Llama model
-k, --api-key KEY API key (required for GPT-4 and Claude)
-t, --max-tokens N Maximum tokens in response (GPT-4 only)
-v, --verbose Enable verbose logging
-p, --prompt TEXT Custom prompt for image description
```
## Examples
### File Extraction Examples
```bash
# Basic usage with defaults (page_as_image method, GPT-4 Vision)
file-extract -t pdf -s document.pdf -o output_dir
file-extract -t html -s webpage.html -o output_dir # HTML always uses page_as_image
# Specify extraction method (not applicable for HTML)
file-extract -t docx -s document.docx -o output_dir -e text_and_images
# Use local Llama model for image description
file-extract -t pptx -s slides.pptx -o output_dir -m llama
# Process all PDFs in a directory with verbose logging
file-extract -t pdf -s input_dir -o output_dir -v
# Use custom OpenAI API key
file-extract -t pdf -s document.pdf -o output_dir -k "your-api-key"
# Use custom prompt for image descriptions
file-extract -t pdf -s document.pdf -o output_dir -p "Focus on text content and layout"
```
### Image Description Examples
```bash
# Basic usage with defaults (GPT-4 Vision)
describe-image -i photo.jpg
# Use local Llama model
describe-image -i photo.jpg -u llama
# Use custom prompt
describe-image -i photo.jpg -p "List the main colors and their proportions"
# Customize token limit
describe-image -i photo.jpg -t 500
# Enable verbose logging
describe-image -i photo.jpg -v
# Use custom OpenAI API key
describe-image -i photo.jpg -k "your-api-key"
# Combine options
describe-image -i photo.jpg -u llama -p "Describe the lighting and shadows" -v
```
## Custom Prompts
PyVisionAI supports custom prompts for both file extraction and image description. Custom prompts allow you to control how content is extracted and described.
### Using Custom Prompts
1. **CLI Usage**
```bash
# File extraction with custom prompt
file-extract -t pdf -s document.pdf -o output_dir -p "Extract all text verbatim and describe any diagrams or images in detail"
# Image description with custom prompt
describe-image -i image.jpg -p "List the main colors and describe the layout of elements"
```
2. **Library Usage**
```python
# File extraction with custom prompt
extractor = create_extractor(
"pdf",
extractor_type="page_as_image",
prompt="Extract all text exactly as it appears and provide detailed descriptions of any charts or diagrams"
)
output_path = extractor.extract("input.pdf", "output_dir")
# Image description with custom prompt
description = describe_image_openai(
"image.jpg",
prompt="Focus on spatial relationships between objects and any text content"
)
```
3. **Environment Variable**
```bash
# Set default prompt via environment variable
export FILE_EXTRACTOR_PROMPT="Extract text and describe visual elements with emphasis on layout"
```
### Writing Effective Prompts
1. **For Page-as-Image Method**
- Include instructions for both text extraction and visual description since the entire page is processed as an image
- Example: "Extract the exact text as it appears on the page and describe any images, diagrams, or visual elements in detail"
2. **For Text-and-Images Method**
- Focus only on image description since text is extracted separately
- The model only sees the images, not the text content
- Example: "Describe the visual content, focusing on what the image represents and any visual elements it contains"
3. **For Image Description**
- Be specific about what aspects to focus on
- Example: "Describe the main elements, their arrangement, and any text visible in the image"
Note: For the page-as-image method, prompts must include both text extraction and visual description instructions, as the entire page is processed as an image. For the text-and-images method, prompts should focus solely on image description, as text is handled separately.
## Contributing
We welcome contributions to PyVisionAI! Whether you're fixing bugs, improving documentation, or proposing new features, your help is appreciated.
Please read our [Contributing Guidelines](CONTRIBUTING.md) for detailed information on:
- Setting up your development environment
- Code style and standards
- Testing requirements
- Pull request process
- Documentation guidelines
### Quick Start for Contributors
1. Fork and clone the repository
2. Install development dependencies:
```bash
pip install poetry
poetry install
```
3. Install pre-commit hooks:
```bash
poetry run pre-commit install
```
4. Make your changes
5. Run tests:
```bash
poetry run pytest
```
6. Submit a pull request
For more detailed instructions, see [CONTRIBUTING.md](CONTRIBUTING.md).