textfromimage

Name: textfromimage
Version: 1.1.1
Home page: https://github.com/OrenGrinker/textfromimage
Summary: Get descriptions of images from OpenAI, Azure OpenAI, and Anthropic Claude models with support for local files and batch processing.
Upload time: 2024-11-26 18:41:42
Author: Oren Grinker
Requires Python: >=3.9
License: not specified
Keywords: openai, gpt-4, claude, azure-openai, computer-vision, image-to-text, ai, machine-learning, batch-processing, local-files, image-analysis, vision-ai
Requirements: none recorded
# TextFromImage

![Python Version](https://img.shields.io/pypi/pyversions/textfromimage)
![PyPI Version](https://img.shields.io/pypi/v/textfromimage)
![License](https://img.shields.io/pypi/l/textfromimage)
![Downloads](https://img.shields.io/pypi/dm/textfromimage)

A powerful Python library for obtaining detailed descriptions of images using various AI models including OpenAI's GPT models, Azure OpenAI, and Anthropic Claude. Perfect for applications requiring image understanding, accessibility features, and content analysis. Supports both local files and URLs, with batch processing capabilities.

## 🌟 Key Features

- 🤖 **Multiple AI Providers**: Support for OpenAI, Azure OpenAI, and Anthropic Claude
- 🌐 **Flexible Input**: Support for both URLs and local file paths
- 📦 **Batch Processing**: Process multiple images (up to 20) concurrently
- 🔄 **Flexible Integration**: Easy-to-use API with multiple initialization options
- 🎯 **Custom Prompting**: Configurable prompts for targeted descriptions
- 🔑 **Secure Authentication**: Multiple authentication methods including environment variables
- 🛠️ **Model Selection**: Support for different model versions and configurations
- 📝 **Type Hints**: Full typing support for better development experience

## 📦 Installation

```bash
pip install textfromimage

# With Azure support (quotes keep shells like zsh from globbing the extras)
pip install "textfromimage[azure]"

# With all optional dependencies
pip install "textfromimage[all]"
```

## 🚀 Quick Start

```python
import textfromimage

# Initialize with API key
textfromimage.openai.init(api_key="your-openai-api-key")

# Process single image (URL or local file)
image_url = 'https://example.com/image.jpg'
local_image = '/path/to/local/image.jpg'

# Get description from URL
url_description = textfromimage.openai.get_description(image_path=image_url)

# Get description from local file
local_description = textfromimage.openai.get_description(image_path=local_image)

# Batch processing
image_paths = [
    'https://example.com/image1.jpg',
    '/path/to/local/image2.jpg',
    'https://example.com/image3.jpg'
]

batch_results = textfromimage.openai.get_description_batch(
    image_paths=image_paths,
    concurrent_limit=3  # Process 3 images at a time
)

# Process results
for result in batch_results:
    if result.success:
        print(f"Success for {result.image_path}: {result.description}")
    else:
        print(f"Failed for {result.image_path}: {result.error}")
```
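The `concurrent_limit` behaviour above can be pictured as a capped thread pool. The sketch below is a standalone illustration of that pattern, not the library's actual implementation; `describe_batch` and its `(path, description, error)` tuples are assumed names:

```python
from concurrent.futures import ThreadPoolExecutor

def describe_batch(image_paths, describe_fn, concurrent_limit=3):
    """Run describe_fn over image_paths with at most concurrent_limit threads.

    Results come back in input order as (path, description, error) tuples,
    mirroring the success/error split shown in the loop above.
    """
    def safe(path):
        try:
            return (path, describe_fn(path), None)
        except Exception as exc:  # record per-image failures instead of aborting the batch
            return (path, None, str(exc))

    with ThreadPoolExecutor(max_workers=concurrent_limit) as pool:
        return list(pool.map(safe, image_paths))
```

Note that `pool.map` preserves input order even when images finish out of order, which keeps the results aligned with `image_paths`.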

## 💡 Advanced Usage

### 🤖 Multiple Provider Support

```python
# Example input reused in this section
image_path = 'https://example.com/image.jpg'

# Anthropic Claude Integration
textfromimage.claude.init(api_key="your-anthropic-api-key")

# Single image
claude_description = textfromimage.claude.get_description(
    image_path=image_path,
    model="claude-3-sonnet-20240229"
)

# Batch processing
claude_results = textfromimage.claude.get_description_batch(
    image_paths=image_paths,
    model="claude-3-sonnet-20240229",
    concurrent_limit=3
)

# Azure OpenAI Integration
textfromimage.azure_openai.init(
    api_key="your-azure-openai-api-key",
    api_base="https://your-azure-endpoint.openai.azure.com/",
    deployment_name="your-deployment-name"
)

# Single image with system prompt
azure_description = textfromimage.azure_openai.get_description(
    image_path=image_path,
    system_prompt="Analyze this image in detail"
)

# Batch processing
azure_results = textfromimage.azure_openai.get_description_batch(
    image_paths=image_paths,
    system_prompt="Analyze each image in detail",
    concurrent_limit=3
)
```
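Because `textfromimage.openai`, `textfromimage.claude`, and `textfromimage.azure_openai` all expose the same `get_description(image_path=...)` call shape, caller code can stay provider-agnostic. A minimal sketch of that pattern (`describe_with` is a hypothetical helper, not part of the library):

```python
def describe_with(provider, image_path, **kwargs):
    """Dispatch to any provider object that exposes get_description(image_path=...),
    the interface shared by the three provider modules shown above."""
    return provider.get_description(image_path=image_path, **kwargs)
```

For example, `describe_with(textfromimage.claude, image_path, model="claude-3-sonnet-20240229")` and `describe_with(textfromimage.openai, image_path)` then differ only in the module passed in.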

### 🔧 Configuration Options

```python
# Environment Variable Configuration
import os
os.environ['OPENAI_API_KEY'] = 'your-openai-api-key'
os.environ['ANTHROPIC_API_KEY'] = 'your-anthropic-api-key'
os.environ['AZURE_OPENAI_API_KEY'] = 'your-azure-openai-api-key'
os.environ['AZURE_OPENAI_ENDPOINT'] = 'your-azure-endpoint'
os.environ['AZURE_OPENAI_DEPLOYMENT'] = 'your-deployment-name'

# Custom options for batch processing
batch_results = textfromimage.openai.get_description_batch(
    image_paths=image_paths,
    model='gpt-4-vision-preview',
    prompt="Describe the main elements of each image",
    max_tokens=300,
    concurrent_limit=5
)
```
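A typical resolution order for `init()` is: explicit argument first, then the environment variable. The helper below sketches that fallback (`resolve_api_key` is illustrative, not a library function):

```python
import os

def resolve_api_key(explicit_key=None, env_var='OPENAI_API_KEY'):
    """Return explicit_key if given, otherwise fall back to the environment.

    Raises ValueError when no key is available, mirroring the common
    init(api_key=...) pattern described above.
    """
    key = explicit_key or os.environ.get(env_var)
    if not key:
        raise ValueError(f'No API key provided and {env_var} is not set')
    return key
```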

## 📋 Parameters and Types

```python
from dataclasses import dataclass
from typing import List, Optional

# Single image processing parameters
def get_description(
    image_path: str,
    prompt: str = "What's in this image?",
    max_tokens: int = 300,
    model: str = "gpt-4-vision-preview"
) -> str: ...

# Batch processing result type
@dataclass
class BatchResult:
    success: bool
    description: Optional[str]
    error: Optional[str]
    image_path: str

# Batch processing parameters
def get_description_batch(
    image_paths: List[str],
    prompt: str = "What's in this image?",
    max_tokens: int = 300,
    model: str = "gpt-4-vision-preview",
    concurrent_limit: int = 3
) -> List[BatchResult]: ...
```
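The `BatchResult` shape above can be exercised without any API access. This standalone sketch reimplements the dataclass and adds a hypothetical `summarize` helper that performs the success/failure split used in the Error Handling section:

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class BatchResult:
    success: bool
    description: Optional[str]
    error: Optional[str]
    image_path: str

def summarize(results: List[BatchResult]) -> Dict[str, List[str]]:
    """Split a batch into succeeded and failed image paths."""
    return {
        'succeeded': [r.image_path for r in results if r.success],
        'failed': [r.image_path for r in results if not r.success],
    }
```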

## 🔍 Error Handling

```python
import textfromimage
from textfromimage.utils import BatchResult

# Single image processing
try:
    description = textfromimage.openai.get_description(image_path=image_path)
except ValueError as e:
    print(f"Image processing error: {e}")
except RuntimeError as e:
    print(f"API error: {e}")

# Batch processing error handling
results = textfromimage.openai.get_description_batch(image_paths)
successful = [r for r in results if r.success]
failed = [r for r in results if not r.success]

for result in failed:
    print(f"Failed to process {result.image_path}: {result.error}")
```
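The `RuntimeError` branch above often represents a transient API failure, which can be worth retrying with backoff. `with_retries` below is an illustrative helper, not part of the library:

```python
import time

def with_retries(fn, attempts=3, base_delay=0.5):
    """Call fn(), retrying RuntimeError with exponential backoff;
    re-raise the error if the final attempt still fails."""
    for attempt in range(attempts):
        try:
            return fn()
        except RuntimeError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

Usage would look like `with_retries(lambda: textfromimage.openai.get_description(image_path=image_path))`.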

## 🤝 Contributing

We welcome contributions! Here's how you can help:

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request

## 📝 License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

            
