web-maestro

Name: web-maestro
Version: 1.0.0
Summary: Production-ready web content extraction with multi-provider LLM support and intelligent browser automation
Upload time: 2025-07-15 01:39:53
Requires Python: >=3.9
License: MIT
Keywords: ai, anthropic, async, automation, browser-automation, claude, content-extraction, data-extraction, gpt, llm, multi-provider, openai, playwright, streaming, web-crawler, web-scraping

            # ๐ŸŒ Web Maestro

[![PyPI version](https://badge.fury.io/py/web-maestro.svg)](https://badge.fury.io/py/web-maestro)
[![Python 3.9+](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Downloads](https://pepy.tech/badge/web-maestro)](https://pepy.tech/project/web-maestro)

**Production-ready web content extraction with multi-provider LLM support and intelligent browser automation.**

Web Maestro is a Python library that combines advanced web scraping capabilities with AI-powered content analysis. It provides browser automation using Playwright and integrates with multiple LLM providers for intelligent content extraction and analysis.

## 🔥 Real-World Example: Smart Baseball Data Pipeline

Imagine you need to build a baseball analytics system that monitors multiple sports websites and extracts game statistics, player performance data, and news updates in real time. Web Maestro makes this straightforward:

```python
import asyncio
from web_maestro import WebMaestro, LLMConfig

async def smart_baseball_crawler():
    # Configure your AI-powered crawler
    config = LLMConfig(
        provider="openai",  # or anthropic, portkey, ollama
        api_key="your-api-key",
        model="gpt-4o"
    )

    maestro = WebMaestro(config)

    # Define what you want to extract
    extraction_prompt = """
    Extract baseball data and structure it as JSON:
    - Game scores and schedules
    - Player statistics (batting avg, ERA, etc.)
    - Injury reports and roster changes
    - Latest news headlines

    Focus on actionable data for fantasy baseball decisions.
    """

    # Crawl multiple sources intelligently
    sources = [
        "https://www.espn.com/mlb/",
        "https://www.mlb.com/",
        "https://www.baseball-reference.com/"
    ]

    for url in sources:
        # AI automatically understands site structure and extracts relevant data
        result = await maestro.extract_structured_data(
            url=url,
            prompt=extraction_prompt,
            output_format="json"
        )

        if result.success:
            print(f"๐Ÿ“Š Extracted from {url}:")
            print(f"โšพ Games: {len(result.data.get('games', []))}")
            print(f"๐Ÿ‘ค Players: {len(result.data.get('players', []))}")
            print(f"๐Ÿ“ฐ News: {len(result.data.get('news', []))}")

            # Data is automatically structured and ready for your database
            await save_to_database(result.data)

# Run your intelligent baseball pipeline
asyncio.run(smart_baseball_crawler())
```
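
The `save_to_database` coroutine in the example is not part of Web Maestro; it stands in for whatever persistence layer you use. A minimal sketch, assuming you simply append each extraction result to a local JSON Lines file:

```python
import json
from datetime import datetime, timezone
from pathlib import Path


async def save_to_database(data: dict, path: str = "extractions.jsonl") -> None:
    """Hypothetical persistence helper: append one extraction result as a JSON line."""
    record = {
        "saved_at": datetime.now(timezone.utc).isoformat(),
        "data": data,
    }
    # A real pipeline would likely use an async driver (asyncpg, motor, ...);
    # plain file I/O keeps this sketch dependency-free.
    with Path(path).open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(record, ensure_ascii=False) + "\n")
```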

**Why This Example Matters:**
- 🧠 **AI-Powered**: No manual CSS selectors or HTML parsing - AI understands content contextually
- 🚀 **Production Ready**: Handles dynamic content, JavaScript-heavy sites, and rate limiting automatically
- 🔄 **Adaptive**: Works across different sports sites without code changes
- 📊 **Structured Output**: Returns clean, structured data ready for analysis or storage

## 🌟 Key Features

### 🚀 **Advanced Web Extraction**
- **Browser Automation**: Powered by Playwright for handling dynamic content and JavaScript-heavy sites
- **DOM Capture**: Intelligent element interaction including clicks, hovers, and content discovery
- **Session Management**: Proper context management for complex extraction workflows

### 🤖 **Multi-Provider LLM Support**
- **Universal Interface**: Works with OpenAI, Anthropic Claude, Portkey, and Ollama through one provider API (see the sketch below)
- **Streaming Support**: Real-time content delivery for better user experience
- **Intelligent Analysis**: AI-powered content extraction and structuring
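
Because every provider implements the same `complete()` / `complete_stream()` interface, application code can stay provider-agnostic and backends are swapped by changing only the `LLMConfig`. A minimal sketch (the helper name, prompt, and the Ollama model name are illustrative assumptions, not part of the library):

```python
from web_maestro import LLMConfig
from web_maestro.providers.base import LLMResponse


async def summarize(provider, text: str) -> str:
    """Illustrative helper: works with any provider exposing complete()."""
    response: LLMResponse = await provider.complete(f"Summarize the following content:\n{text}")
    return response.content if response.success else f"error: {response.error}"


# Swapping backends is a configuration change, not a code change:
openai_cfg = LLMConfig(provider="openai", api_key="sk-...", model="gpt-4o")
ollama_cfg = LLMConfig(provider="ollama", api_key="not-needed", model="llama3")  # model name assumed
```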

### 🔧 **Developer Experience**
- **Clean API**: Intuitive, well-documented interface
- **Type Safety**: Full type hints and Pydantic models
- **Async Support**: Built for modern async/await patterns
- **Extensible**: Modular architecture for custom providers

## 📦 Installation

### Basic Installation
```bash
pip install web-maestro
```

### Quick Verification
After installation, verify everything works:

```python
# Test basic import
from web_maestro import LLMConfig, SessionContext
print("โœ… Web Maestro installed successfully!")

# Check available providers
from web_maestro.providers.factory import ProviderRegistry
print(f"๐Ÿ“ฆ Available providers: {ProviderRegistry.list_providers()}")
```

### With Specific LLM Provider
Choose your preferred AI provider:

```bash
# For OpenAI GPT models
pip install "web-maestro[openai]"

# For Anthropic Claude models
pip install "web-maestro[anthropic]"

# For Portkey AI gateway
pip install "web-maestro[portkey]"

# For local Ollama models
pip install "web-maestro[ollama]"

# Install all providers
pip install "web-maestro[all-providers]"
```

### System Dependencies

Web Maestro requires **Poppler** for PDF processing functionality:

**macOS (Homebrew):**
```bash
brew install poppler
```

**Ubuntu/Debian:**
```bash
sudo apt-get install poppler-utils
```

**Windows:**
Download from: https://blog.alivate.com.au/poppler-windows/

### Development Installation

**Quick Setup (Recommended):**
```bash
git clone https://github.com/fede-dash/web-maestro.git
cd web-maestro

# Automated setup - installs system deps, Python deps, and browsers
hatch run setup-dev
```

**Manual Setup:**
```bash
git clone https://github.com/fede-dash/web-maestro.git
cd web-maestro

# Install system dependencies
brew install poppler  # macOS
# sudo apt-get install poppler-utils  # Linux

# Install Python dependencies
pip install -e ".[dev,all-features]"

# Install browsers for Playwright
playwright install

# Setup pre-commit hooks
pre-commit install
```

**Available Hatch Scripts:**
```bash
# Full system and dev setup
hatch run setup-dev

# Install just system dependencies
hatch run setup-system

# Full setup for production use
hatch run setup-full

# Run tests
hatch run test

# Run tests with coverage
hatch run test-cov

# Format and lint code
hatch run format
hatch run lint
```

## 🚀 Quick Start

### Basic Web Content Extraction

```python
import asyncio
from web_maestro import fetch_rendered_html, SessionContext
from web_maestro.providers.portkey import PortkeyProvider
from web_maestro import LLMConfig

async def extract_content():
    # Configure your LLM provider
    config = LLMConfig(
        provider="portkey",
        api_key="your-api-key",
        model="gpt-4",
        base_url="your-portkey-endpoint",
        extra_params={"virtual_key": "your-virtual-key"}
    )

    provider = PortkeyProvider(config)

    # Extract content using browser automation
    ctx = SessionContext()
    blocks = await fetch_rendered_html(
        url="https://example.com",
        ctx=ctx
    )

    if blocks:
        # Combine extracted content
        content = "\n".join([block.content for block in blocks[:20]])

        # Analyze with AI
        response = await provider.complete(
            f"Analyze this content and extract key information:\n{content[:5000]}"
        )

        if response.success:
            print("Extracted content:", response.content)
        else:
            print("Error:", response.error)

asyncio.run(extract_content())
```

### Streaming Content Analysis

```python
import asyncio
from web_maestro.providers.portkey import PortkeyProvider
from web_maestro import LLMConfig

async def stream_analysis():
    config = LLMConfig(
        provider="portkey",
        api_key="your-api-key",
        model="gpt-4",
        base_url="your-endpoint",
        extra_params={"virtual_key": "your-virtual-key"}
    )

    provider = PortkeyProvider(config)

    # Stream response chunks in real-time
    prompt = "Write a detailed analysis of modern web scraping techniques."

    async for chunk in provider.complete_stream(prompt):
        print(chunk, end="", flush=True)

asyncio.run(stream_analysis())
```

### Using Enhanced Fetcher

```python
import asyncio
from web_maestro.utils import EnhancedFetcher

async def fetch_with_caching():
    # Create fetcher with intelligent caching
    fetcher = EnhancedFetcher(cache_ttl=300)  # 5-minute cache

    # Attempt static fetch first, fallback to browser if needed
    blocks = await fetcher.try_static_first("https://example.com")

    print(f"Fetched {len(blocks)} content blocks")
    for block in blocks[:5]:
        print(f"[{block.content_type}] {block.content[:100]}...")

asyncio.run(fetch_with_caching())
```

## 🎯 Current Capabilities

### ✅ **What's Working**
- **Browser Automation**: Full Playwright integration for dynamic content
- **Multi-Provider LLM**: OpenAI, Anthropic, Portkey, and Ollama support
- **Streaming**: Real-time response streaming from LLM providers
- **Content Extraction**: DOM capture with multiple content types
- **Session Management**: Proper browser context and session handling
- **Type Safety**: Comprehensive type hints throughout the codebase

### 🚧 **In Development**
- **WebMaestro Class**: High-level orchestration (basic implementation exists)
- **Advanced DOM Interaction**: Tab expansion, hover detection (framework exists)
- **Rate Limiting**: Smart request throttling (utility classes available)
- **Caching Layer**: Response caching with TTL (basic implementation exists)

### 📋 **Planned Features**
- Comprehensive test suite
- Advanced error recovery
- Performance monitoring
- Plugin architecture
- Documentation website

## 🔮 **Future Roadmap: WebActions Framework**

> **🚧 Coming Soon**: Web Maestro is evolving beyond content extraction into intelligent web automation with **WebActions**, a framework for automated web interactions.

The next major release will introduce **WebActions**, an automation framework that extends beyond content extraction to sophisticated web interaction capabilities:

### 🎯 **Planned WebActions Features:**
- **🤖 Intelligent Form Automation**: AI-driven form completion with context understanding and validation
- **🔄 Complex Workflow Execution**: Multi-step web processes with decision trees and conditional logic
- **📱 Interactive Element Management**: Smart handling of dropdowns, modals, and dynamic UI components
- **🔐 Authentication Workflows**: Automated login sequences with credential management and session persistence
- **📊 Data Submission Pipelines**: Intelligent data entry with validation and error handling
- **🎮 Game-like Interactions**: Advanced interaction patterns for complex web applications
- **🧠 Action Learning**: Machine learning-based action optimization and pattern recognition

### 🌟 **WebActions Vision:**
WebActions will turn Web Maestro from a content extraction tool into a web automation agent that performs complex interactions with the same intelligence and adaptability as today's content analysis features. This evolution will enable use cases such as:

- **Automated Data Entry**: Intelligent form completion across multiple systems
- **Complex Multi-Step Workflows**: End-to-end process automation with decision making
- **Intelligent Web Application Testing**: AI-driven testing with adaptive scenarios
- **Dynamic Content Management**: Automated content publishing and management workflows

**Beta Status**: The current version focuses on content extraction and analysis. WebActions capabilities are in active development and will be released in future versions.

## 🔧 Configuration

### LLM Provider Setup

```python
from web_maestro import LLMConfig

# OpenAI Configuration
openai_config = LLMConfig(
    provider="openai",
    api_key="sk-...",
    model="gpt-4",
    temperature=0.7,
    max_tokens=2000
)

# Portkey Configuration (with gateway)
portkey_config = LLMConfig(
    provider="portkey",
    api_key="your-portkey-key",
    model="gpt-4",
    base_url="https://your-gateway.com/v1",
    extra_params={
        "virtual_key": "your-virtual-key"
    }
)

# Anthropic Configuration
anthropic_config = LLMConfig(
    provider="anthropic",
    api_key="sk-ant-...",
    model="claude-3-sonnet",
    temperature=0.5
)
```
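
Since provider selection is just configuration, one common pattern is to choose the config (and matching provider class) at runtime. A sketch under the assumption that only OpenAI and Portkey are needed; the `WEB_MAESTRO_PROVIDER` environment variable and `build_provider` helper are illustrative, not part of the library:

```python
import os

from web_maestro.providers.openai import OpenAIProvider
from web_maestro.providers.portkey import PortkeyProvider


def build_provider():
    """Illustrative factory: pick a provider based on an environment variable."""
    name = os.getenv("WEB_MAESTRO_PROVIDER", "openai")
    if name == "portkey":
        return PortkeyProvider(portkey_config)  # configs defined in the block above
    return OpenAIProvider(openai_config)


provider = build_provider()
```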

### Browser Configuration

```python
browser_config = {
    "headless": True,
    "timeout_ms": 30000,
    "viewport": {"width": 1920, "height": 1080},
    "max_scrolls": 15,
    "max_elements_to_click": 25,
    "stability_timeout_ms": 2000
}

blocks = await fetch_rendered_html(
    url="https://complex-spa.com",
    ctx=ctx,
    config=browser_config
)
```

## 📚 API Overview

### Core Functions

```python
# Browser-based content extraction
from web_maestro import fetch_rendered_html, SessionContext

ctx = SessionContext()
blocks = await fetch_rendered_html(url, ctx, config)
```

### Provider Classes

```python
# All providers implement the same interface
from web_maestro.providers.portkey import PortkeyProvider

provider = PortkeyProvider(config)
response = await provider.complete(prompt)
stream = provider.complete_stream(prompt)
```

### Utility Classes

```python
# Enhanced fetching with caching
from web_maestro.utils import EnhancedFetcher, RateLimiter

fetcher = EnhancedFetcher(cache_ttl=300)
rate_limiter = RateLimiter(max_requests=10, time_window=60)
```

### Data Models

```python
# Structured data types
from web_maestro.models.types import CapturedBlock, CaptureType
from web_maestro.providers.base import LLMResponse, ModelCapability
```
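
Captured blocks carry the extracted text plus a content-type tag (the `block.content` and `block.content_type` attributes used in the examples above). A small sketch that buckets blocks by type before handing them to an LLM, assuming only those two attributes:

```python
from collections import defaultdict

from web_maestro.models.types import CapturedBlock


def group_blocks(blocks: list[CapturedBlock]) -> dict[str, list[str]]:
    """Group captured text by its content type."""
    grouped: dict[str, list[str]] = defaultdict(list)
    for block in blocks:
        grouped[str(block.content_type)].append(block.content)
    return dict(grouped)
```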

## ๐Ÿ›ก๏ธ Error Handling

```python
try:
    response = await provider.complete("Your prompt")

    if response.success:
        print("Response:", response.content)
        print(f"Tokens used: {response.total_tokens}")
    else:
        print("Error:", response.error)

except Exception as e:
    print(f"Unexpected error: {e}")
```
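
For transient failures (timeouts, rate limits), a retry wrapper around `provider.complete()` can help. A minimal exponential-backoff sketch; the retry policy and helper name are assumptions, not part of the library:

```python
import asyncio


async def complete_with_retry(provider, prompt: str, attempts: int = 3):
    """Retry provider.complete() with exponential backoff; return the last response."""
    delay = 1.0
    for attempt in range(1, attempts + 1):
        response = await provider.complete(prompt)
        if response.success or attempt == attempts:
            return response
        await asyncio.sleep(delay)  # back off before retrying a failed call
        delay *= 2
```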

## 🔄 Streaming Support

```python
# Stream responses for real-time delivery
async for chunk in provider.complete_stream("Your prompt"):
    print(chunk, end="", flush=True)

# Chat streaming
messages = [{"role": "user", "content": "Hello"}]
async for chunk in provider.complete_chat_stream(messages):
    print(chunk, end="", flush=True)
```
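
If you also need the full text after streaming it to the terminal, collect the chunks as they arrive. A small sketch reusing the same `complete_stream()` call:

```python
async def stream_and_collect(provider, prompt: str) -> str:
    """Print chunks as they arrive and return the assembled response text."""
    chunks: list[str] = []
    async for chunk in provider.complete_stream(prompt):
        print(chunk, end="", flush=True)
        chunks.append(chunk)
    print()  # final newline once the stream ends
    return "".join(chunks)
```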

## 🧪 Testing Your Setup

```python
# Test provider connectivity
import asyncio
from web_maestro.providers.openai import OpenAIProvider
from web_maestro import LLMConfig

async def test_setup():
    config = LLMConfig(
        provider="openai",
        api_key="your-openai-api-key",
        model="gpt-3.5-turbo"
    )

    provider = OpenAIProvider(config)
    response = await provider.complete("Hello, world!")

    if response.success:
        print("โœ… Provider working:", response.content)
    else:
        print("โŒ Provider failed:", response.error)

# Run the test
asyncio.run(test_setup())
```

## 🛠️ Troubleshooting

### Common Issues and Solutions

**Import Error: "No module named 'web_maestro'"**
```bash
# Make sure you installed the package
pip install web-maestro

# If using conda
conda install -c conda-forge web-maestro  # Not yet available
```

**Browser Dependencies Missing**
```bash
# Install Playwright browsers
playwright install

# On Linux, you might need additional dependencies
sudo apt-get install libnss3 libxss1 libasound2
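
# Playwright's CLI can also install the system libraries it needs:
playwright install-deps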
```

**PDF Processing Issues**
```bash
# Install Poppler (required for PDF processing)
# macOS
brew install poppler

# Ubuntu/Debian
sudo apt-get install poppler-utils

# Windows: Download from https://blog.alivate.com.au/poppler-windows/
```

**LLM Provider Authentication**
```python
# Verify your API keys are set correctly
import os
print("OpenAI API Key:", os.getenv("OPENAI_API_KEY", "Not set"))
print("Anthropic API Key:", os.getenv("ANTHROPIC_API_KEY", "Not set"))
```
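
Rather than hard-coding keys as in the examples above, you can build the config from those environment variables. A minimal sketch using only the `LLMConfig` fields already shown in this README:

```python
import os

from web_maestro import LLMConfig

config = LLMConfig(
    provider="openai",
    api_key=os.environ["OPENAI_API_KEY"],  # fails fast if the key is missing
    model="gpt-4o",
)
```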

**Rate Limiting Issues**
```python
# Use built-in rate limiting
from web_maestro.utils import RateLimiter

rate_limiter = RateLimiter(max_requests=10, time_window=60)
await rate_limiter.acquire()  # Will wait if needed
```
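
Combining the limiter with browser fetches keeps a multi-URL crawl inside the request budget. A sketch built only from calls shown elsewhere in this README:

```python
import asyncio

from web_maestro import SessionContext, fetch_rendered_html
from web_maestro.utils import RateLimiter


async def polite_crawl(urls: list[str]):
    """Fetch each URL, waiting on the rate limiter between requests."""
    limiter = RateLimiter(max_requests=10, time_window=60)
    ctx = SessionContext()
    results = {}
    for url in urls:
        await limiter.acquire()  # blocks until a request slot is free
        results[url] = await fetch_rendered_html(url=url, ctx=ctx)
    return results


# asyncio.run(polite_crawl(["https://example.com"]))
```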

## ๐Ÿ“ Project Structure

```
web-maestro/
├── src/web_maestro/
│   ├── __init__.py              # Main exports
│   ├── multi_provider.py        # WebMaestro orchestrator
│   ├── fetch.py                 # Core fetching logic
│   ├── context.py               # Session management
│   ├── providers/               # LLM provider implementations
│   │   ├── base.py              # Base provider interface
│   │   ├── portkey.py           # Portkey provider
│   │   ├── openai.py            # OpenAI provider
│   │   ├── anthropic.py         # Anthropic provider
│   │   └── ollama.py            # Ollama provider
│   ├── utils/                   # Utility classes
│   │   ├── enhanced_fetch.py    # Smart fetching
│   │   ├── rate_limiter.py      # Rate limiting
│   │   ├── text_processor.py    # Text processing
│   │   └── json_processor.py    # JSON handling
│   ├── models/                  # Data models
│   │   └── types.py             # Type definitions
│   └── dom_capture/             # Browser automation
│       ├── universal_capture.py # DOM interaction
│       └── scroll.py            # Scrolling logic
├── tests/                       # Test files
├── docs/                        # Documentation
├── pyproject.toml               # Project configuration
└── README.md                    # This file
```

## 📖 Examples

### Real Website Extraction

Test with a real website (example using Chelsea FC):

```python
import asyncio
from web_maestro import fetch_rendered_html, SessionContext
from web_maestro.providers.portkey import PortkeyProvider
from web_maestro import LLMConfig

async def extract_chelsea_info():
    # Configure provider
    config = LLMConfig(
        provider="portkey",
        api_key="your-key",
        model="gpt-4o",
        base_url="your-endpoint",
        extra_params={"virtual_key": "your-virtual-key"}
    )

    provider = PortkeyProvider(config)

    # Extract website content
    ctx = SessionContext()
    blocks = await fetch_rendered_html("https://www.chelseafc.com/en", ctx)

    if blocks:
        # Analyze with AI
        content = "\n".join([block.content for block in blocks[:50]])

        response = await provider.complete(f"""
        Extract soccer information from this Chelsea FC website:
        1. Latest news and match updates
        2. Upcoming fixtures
        3. Team news

        Website content:
        {content[:5000]}
        """)

        if response.success:
            print("โšฝ Extracted Information:")
            print(response.content)

asyncio.run(extract_chelsea_info())
```

## ๐Ÿค Contributing

We welcome contributions! See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.

### Quick Development Setup

```bash
# Clone the repository
git clone https://github.com/fede-dash/web-maestro.git
cd web-maestro

# Automated setup (recommended)
./scripts/setup-dev.sh

# Or manual setup
pip install hatch
hatch run install-dev
hatch run install-hooks
```

### Development Commands

```bash
# Code quality (using Hatch - recommended)
hatch run format      # Format code with Black and Ruff
hatch run lint        # Run linting checks
hatch run check       # Run all quality checks

# Testing
hatch run test        # Run tests
hatch run test-cov    # Run tests with coverage

# Or use Make commands
make format           # Format code
make lint            # Run linting
make check           # Run all checks
make test            # Run tests
make dev-setup       # Full development setup
```

### Pre-commit Hooks

Pre-commit hooks are **mandatory** and will run automatically:

```bash
# Install hooks (done automatically by setup script)
hatch run install-hooks

# Run hooks manually
pre-commit run --all-files
```

## 📄 License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## 📈 Version History

### v1.0.0 (Current)
- ✅ **Initial Release**: Production-ready web content extraction
- ✅ **Multi-Provider LLM Support**: OpenAI, Anthropic, Portkey, Ollama
- ✅ **Browser Automation**: Full Playwright integration
- ✅ **Streaming Support**: Real-time response streaming
- ✅ **Type Safety**: Comprehensive type hints throughout
- ✅ **Session Management**: Proper browser context handling

### 🚀 Coming Soon
- **v1.1.0**: WebActions framework for intelligent web automation
- **v1.2.0**: Advanced caching and rate limiting
- **v1.3.0**: Plugin architecture and custom providers

## 🆘 Support & Contact

- **PyPI Package**: [web-maestro on PyPI](https://pypi.org/project/web-maestro/)
- **Issues**: [GitHub Issues](https://github.com/fede-dash/web-maestro/issues)
- **Questions**: Create a discussion or issue
- **Documentation**: [GitHub Repository](https://github.com/fede-dash/web-maestro)
- **Email**: For enterprise support inquiries

## 🔗 Related Projects

- **Playwright**: Browser automation framework
- **Beautiful Soup**: HTML parsing library
- **aiohttp**: Async HTTP client
- **Pydantic**: Data validation and settings management

---

<div align="center">

**Web Maestro - Intelligent Web Content Extraction**

[โญ Star us on GitHub](https://github.com/fede-dash/web-maestro) | [๐Ÿ“š Documentation](docs/) | [๐Ÿ› Report Issue](https://github.com/fede-dash/web-maestro/issues)

</div>

            
