# Browse-to-Test
**AI-Powered Browser Automation to Test Script Converter**
Browse-to-Test is a Python library that uses AI to convert browser automation data into test scripts for various testing frameworks (Playwright, Selenium, Cypress, etc.). It provides an intelligent, configurable, and extensible way to transform recorded browser interactions into maintainable test code.
## 🌟 Features
- **🤖 AI-Powered Conversion**: Uses OpenAI, Anthropic, or other AI providers to intelligently convert automation data
- **🧠 Context-Aware Generation**: Leverages existing tests, documentation, and system knowledge for intelligent analysis
- **🔌 Multi-Framework Support**: Generate tests for Playwright, Selenium, Cypress, and more
- **🏗️ Plugin Architecture**: Easily extensible with custom plugins for new frameworks/languages
- **⚙️ Highly Configurable**: Comprehensive configuration system for fine-tuning output
- **🔍 Smart Analysis**: AI-powered action analysis and optimization with system context
- **📚 System Intelligence**: Analyzes existing tests, UI components, API endpoints, and documentation
- **🎯 Pattern Recognition**: Identifies similar tests and reuses established patterns
- **🔐 Sensitive Data Handling**: Automatic detection and secure handling of sensitive information
- **📊 Validation & Preview**: Built-in validation and preview capabilities with context insights
- **🚀 Easy to Use**: Simple API with sensible defaults and intelligent recommendations
## 🚀 Quick Start
### Installation
```bash
# Basic installation
pip install browse-to-test

# With AI providers
pip install browse-to-test[openai,anthropic]

# With testing frameworks
pip install browse-to-test[playwright,selenium]

# Full installation
pip install browse-to-test[all]
```
### Basic Usage
```python
import browse_to_test as btt
# Your browser automation data
automation_data = [
    {
        "model_output": {
            "action": [{"go_to_url": {"url": "https://example.com"}}]
        },
        "state": {"interacted_element": []}
    },
    # ... more steps
]
# Convert to Playwright test script - one line!
script = btt.convert(automation_data, framework="playwright", ai_provider="openai")
print(script)
```
### Advanced Usage with ConfigBuilder
```python
import browse_to_test as btt
# Build custom configuration with fluent interface
config = btt.ConfigBuilder() \
    .framework("playwright") \
    .ai_provider("openai", model="gpt-4") \
    .language("python") \
    .include_assertions(True) \
    .include_error_handling(True) \
    .sensitive_data_keys(["username", "password"]) \
    .enable_context_collection() \
    .thorough_mode() \
    .build()
# Create converter with custom config
converter = btt.E2eTestConverter(config)
script = converter.convert(automation_data)
```
### Incremental Live Generation
```python
import browse_to_test as btt
# Start incremental session
config = btt.ConfigBuilder().framework("playwright").build()
session = btt.IncrementalSession(config)
# Start session
result = session.start("https://example.com")
# Add steps as they happen
for step_data in automation_steps:
    result = session.add_step(step_data)
    print(f"Current script:\n{result.current_script}")
# Finalize when done
final = session.finalize()
print(f"Complete test script:\n{final.current_script}")
```
### Context-Aware Generation
```python
import browse_to_test as btt
# Enable context-aware features
config = btt.Config(
    ai=btt.AIConfig(
        provider="openai",
        model="gpt-4",
    ),
    output=btt.OutputConfig(
        framework="playwright",
        include_assertions=True,
        include_error_handling=True,
    ),
    processing=btt.ProcessingConfig(
        # Enable intelligent analysis with system context
        analyze_actions_with_ai=True,
        collect_system_context=True,
        use_intelligent_analysis=True,

        # Context collection settings
        include_existing_tests=True,
        include_documentation=True,
        include_ui_components=True,
        include_api_endpoints=True,

        # Analysis settings
        context_analysis_depth="deep",
        max_similar_tests=5,
        context_similarity_threshold=0.3,
    ),
    project_root=".",  # Project root for context collection
    verbose=True,
)

orchestrator = btt.E2eScriptOrchestrator(config)

# Generate context-aware test script
test_script = orchestrator.generate_test_script(
    automation_data=automation_data,
    target_url="https://example.com/login",
    context_hints={
        "flow_type": "authentication",
        "critical_elements": ["username", "password", "submit"]
    }
)

# Preview with context insights
preview = orchestrator.preview_conversion(
    automation_data=automation_data,
    target_url="https://example.com/login"
)
print(f"Similar tests found: {len(preview.get('similar_tests', []))}")
print(f"Context quality score: {preview.get('estimated_quality_score', 0)}")
```
## 📖 Documentation
### Core Concepts
#### 1. Input Data Format
Browse-to-Test expects browser automation data in a specific JSON format:
```json
[
    {
        "model_output": {
            "action": [
                {
                    "action_type": {
                        "parameter1": "value1",
                        "parameter2": "value2"
                    }
                }
            ]
        },
        "state": {
            "interacted_element": [
                {
                    "xpath": "//button[@id='submit']",
                    "css_selector": "button#submit",
                    "attributes": {"id": "submit", "type": "button"}
                }
            ]
        },
        "metadata": {
            "step_start_time": 1640995200.0,
            "elapsed_time": 2.5
        }
    }
]
```
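The structure above nests each step's actions under `model_output.action`, where every action is a single-key dict mapping the action type to its parameters. A small stand-alone helper (hypothetical, not part of the library's API) shows how that nesting is traversed:

```python
# Hypothetical helper: pull the action-type names out of automation data
# in the format shown above. Illustration only, not a browse-to-test API.
def extract_action_types(automation_data):
    """Return the action-type names from each step, in order."""
    action_types = []
    for step in automation_data:
        for action in step.get("model_output", {}).get("action", []):
            # Each action is a single-key dict: {"action_type": {...params}}
            action_types.extend(action.keys())
    return action_types

steps = [
    {"model_output": {"action": [{"go_to_url": {"url": "https://example.com"}}]},
     "state": {"interacted_element": []}},
    {"model_output": {"action": [{"click_element": {"index": 0}}]},
     "state": {"interacted_element": []}},
]
print(extract_action_types(steps))  # ['go_to_url', 'click_element']
```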
#### 2. Supported Actions
| Action Type | Description | Parameters |
|-------------|-------------|------------|
| `go_to_url` | Navigate to URL | `url` |
| `input_text` | Enter text in field | `text`, `index` |
| `click_element` | Click an element | `index` |
| `scroll_down` | Scroll page down | `amount` (optional) |
| `scroll_up` | Scroll page up | `amount` (optional) |
| `wait` | Wait for time | `seconds` |
| `done` | Mark completion | `text`, `success` |
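Before converting, it can help to check recorded data against this table. The sketch below mirrors the table in a plain-Python pre-flight check; the `SUPPORTED_ACTIONS` set and the function are illustrative, not part of the library:

```python
# Pre-flight check against the supported-action table above (sketch).
SUPPORTED_ACTIONS = {
    "go_to_url", "input_text", "click_element",
    "scroll_down", "scroll_up", "wait", "done",
}

def find_unsupported_actions(automation_data):
    """Return (step_index, action_name) pairs for unknown action types."""
    unsupported = []
    for i, step in enumerate(automation_data):
        for action in step.get("model_output", {}).get("action", []):
            for name in action:
                if name not in SUPPORTED_ACTIONS:
                    unsupported.append((i, name))
    return unsupported

data = [{"model_output": {"action": [{"wait": {"seconds": 2}},
                                     {"hover_element": {"index": 1}}]}}]
print(find_unsupported_actions(data))  # [(0, 'hover_element')]
```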
#### 3. Configuration System
The library uses a hierarchical configuration system:
```python
config = btt.Config(
    # AI Provider Settings
    ai=btt.AIConfig(
        provider="openai",       # openai, anthropic, azure, local
        model="gpt-4",           # Provider-specific model
        temperature=0.1,         # Generation randomness (0-2)
        max_tokens=4000,         # Maximum response tokens
        api_key="your-key",      # API key (or use env vars)
    ),

    # Output Settings
    output=btt.OutputConfig(
        framework="playwright",        # Target framework
        language="python",             # Target language
        test_type="script",            # script, test, spec
        include_assertions=True,       # Add test assertions
        include_waits=True,            # Add explicit waits
        include_error_handling=True,   # Add try-catch blocks
        include_logging=True,          # Add logging statements
        sensitive_data_keys=["username", "password"],
        mask_sensitive_data=True,      # Mask sensitive data
    ),

    # Processing Settings
    processing=btt.ProcessingConfig(
        analyze_actions_with_ai=True,  # Use AI for analysis
        optimize_selectors=True,       # Optimize CSS/XPath selectors
        validate_actions=True,         # Validate action feasibility
        strict_mode=False,             # Fail on any errors

        # Context Collection Settings
        collect_system_context=True,   # Enable context collection
        use_intelligent_analysis=True, # Use AI with system context
        include_existing_tests=True,   # Analyze existing test files
        include_documentation=True,    # Include project documentation
        include_ui_components=True,    # Analyze UI component files
        include_api_endpoints=True,    # Include API endpoint info
        include_recent_changes=True,   # Consider recent git changes

        # Context Analysis Settings
        context_analysis_depth="deep",     # shallow, medium, deep
        max_similar_tests=5,               # Max similar tests to consider
        context_similarity_threshold=0.3,  # Similarity threshold (0-1)
        max_context_files=100,             # Limit files for performance
    ),

    # Global Settings
    debug=False,
    verbose=False,
    log_level="INFO",
)
```
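The `sensitive_data_keys` and `mask_sensitive_data` options above work together so that real credentials never land in generated scripts. A minimal sketch of the idea, assuming placeholder-style masking (the library's actual masking format may differ):

```python
# Sketch of sensitive-data masking: values for configured keys are
# replaced with placeholders. Illustrative only, not the library's code.
def mask_params(params, sensitive_keys):
    return {
        key: f"<secret>{key}</secret>" if key in sensitive_keys else value
        for key, value in params.items()
    }

masked = mask_params(
    {"username": "alice", "password": "hunter2", "index": 3},
    sensitive_keys={"username", "password"},
)
print(masked["password"])  # <secret>password</secret>
print(masked["index"])     # 3 (non-sensitive values pass through)
```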
### Environment Variables
Set these environment variables for AI providers:
```bash
# OpenAI
export OPENAI_API_KEY="your-openai-key"
# Anthropic
export ANTHROPIC_API_KEY="your-anthropic-key"
# Azure OpenAI
export AZURE_OPENAI_API_KEY="your-azure-key"
export AZURE_OPENAI_ENDPOINT="your-azure-endpoint"
# Browse-to-Test specific
export BROWSE_TO_TEST_AI_PROVIDER="openai"
export BROWSE_TO_TEST_OUTPUT_FRAMEWORK="playwright"
export BROWSE_TO_TEST_DEBUG="true"
# Context-aware features
export BROWSE_TO_TEST_PROCESSING_COLLECT_CONTEXT="true"
export BROWSE_TO_TEST_PROCESSING_USE_INTELLIGENT_ANALYSIS="true"
export BROWSE_TO_TEST_PROCESSING_CONTEXT_ANALYSIS_DEPTH="deep"
```
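When a value is supplied both in code and via one of the variables above, explicit configuration typically takes precedence over the environment, which in turn beats the built-in default. A sketch of that resolution order, assuming this common precedence (the resolver itself is hypothetical):

```python
import os

# Hypothetical resolver showing the assumed precedence:
# explicit config value > environment variable > default.
def resolve_setting(explicit, env_var, default):
    if explicit is not None:
        return explicit
    return os.environ.get(env_var, default)

os.environ["BROWSE_TO_TEST_AI_PROVIDER"] = "anthropic"
print(resolve_setting(None, "BROWSE_TO_TEST_AI_PROVIDER", "openai"))      # anthropic
print(resolve_setting("openai", "BROWSE_TO_TEST_AI_PROVIDER", "openai"))  # openai
```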
## 🔌 Plugin System
Browse-to-Test uses a plugin architecture to support different frameworks and languages.
### Available Plugins
| Plugin | Frameworks | Languages | Status |
|--------|------------|-----------|--------|
| Playwright | `playwright` | `python` | ✅ Stable |
| Selenium | `selenium`, `webdriver` | `python` | ✅ Stable |
| Cypress | `cypress` | `javascript`, `typescript` | 🚧 Community |
### Creating Custom Plugins
```python
from typing import List

import browse_to_test as btt
from browse_to_test.plugins.base import OutputPlugin, GeneratedTestScript


class MyCustomPlugin(OutputPlugin):
    @property
    def plugin_name(self) -> str:
        return "my-framework"

    @property
    def supported_frameworks(self) -> List[str]:
        return ["my-framework"]

    @property
    def supported_languages(self) -> List[str]:
        return ["python", "javascript"]

    def generate_test_script(self, parsed_data, analysis_results=None):
        # Your custom generation logic here
        script_content = self._generate_custom_script(parsed_data)
        return GeneratedTestScript(
            content=script_content,
            language=self.config.language,
            framework=self.config.framework,
        )


# Register your plugin
registry = btt.PluginRegistry()
registry.register_plugin("my-framework", MyCustomPlugin)
```
## 🛠️ API Reference
### Main Functions
#### `convert_to_test_script(automation_data, output_framework, ai_provider, config=None)`
Convert automation data to test script (convenience function).
**Parameters:**
- `automation_data`: List of automation steps or path to JSON file
- `output_framework`: Target framework ("playwright", "selenium", etc.)
- `ai_provider`: AI provider ("openai", "anthropic", etc.)
- `config`: Optional configuration dictionary
**Returns:** Generated test script as string
#### `list_available_plugins()`
List all available output framework plugins.
**Returns:** List of plugin names
#### `list_available_ai_providers()`
List all available AI providers.
**Returns:** List of provider names
### Core Classes
#### `E2eScriptOrchestrator(config)`
Main orchestrator class that coordinates the conversion process.
**Methods:**
- `generate_test_script(automation_data, custom_config=None)`: Generate test script
- `generate_with_multiple_frameworks(automation_data, frameworks)`: Generate for multiple frameworks
- `validate_configuration()`: Validate current configuration
- `preview_conversion(automation_data, max_actions=5)`: Preview conversion
#### `Config`, `AIConfig`, `OutputConfig`, `ProcessingConfig`
Configuration classes for different aspects of the library.
#### `PluginRegistry`
Registry for managing output plugins.
**Methods:**
- `register_plugin(name, plugin_class)`: Register a new plugin
- `create_plugin(config)`: Create plugin instance
- `list_available_plugins()`: List available plugins
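Conceptually, a registry like this is a name-to-class mapping with register, create, and list operations. A minimal stand-alone sketch (the real `PluginRegistry` does more validation, and its `create_plugin` takes a config rather than a name):

```python
# Minimal stand-in for the plugin registry described above. Sketch only.
class SimpleRegistry:
    def __init__(self):
        self._plugins = {}  # name -> plugin class

    def register_plugin(self, name, plugin_class):
        self._plugins[name] = plugin_class

    def create_plugin(self, name, *args, **kwargs):
        # Instantiate the registered class; raises KeyError if unknown.
        return self._plugins[name](*args, **kwargs)

    def list_available_plugins(self):
        return sorted(self._plugins)

class DummyPlugin:
    pass

registry = SimpleRegistry()
registry.register_plugin("dummy", DummyPlugin)
print(registry.list_available_plugins())                      # ['dummy']
print(isinstance(registry.create_plugin("dummy"), DummyPlugin))  # True
```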
## 📊 Examples
### Generate Multiple Frameworks
```python
orchestrator = btt.E2eScriptOrchestrator(config)
scripts = orchestrator.generate_with_multiple_frameworks(
    automation_data,
    ["playwright", "selenium"]
)

for framework, script in scripts.items():
    with open(f"test_{framework}.py", "w") as f:
        f.write(script)
```
### Load from File
```python
import json

# Save automation data to file
with open("automation_data.json", "w") as f:
    json.dump(automation_data, f)

# Load and convert from file
script = btt.convert_to_test_script(
    automation_data="automation_data.json",
    output_framework="playwright"
)
```
### Custom Configuration
```python
config = {
    "ai": {"provider": "anthropic", "model": "claude-3-sonnet"},
    "output": {"include_screenshots": True, "add_comments": True},
    "processing": {"analyze_actions_with_ai": False}
}

script = btt.convert_to_test_script(
    automation_data=data,
    output_framework="selenium",
    config=config
)
```
### Preview Before Generation
```python
orchestrator = btt.E2eScriptOrchestrator(config)
preview = orchestrator.preview_conversion(automation_data)
print(f"Total steps: {preview['total_steps']}")
print(f"Total actions: {preview['total_actions']}")
print(f"Action types: {preview['action_types']}")
print(f"Validation issues: {preview['validation_issues']}")
print(f"Similar tests: {len(preview.get('similar_tests', []))}")
print(f"Quality score: {preview.get('estimated_quality_score', 0)}")
```
## 🧠 Context-Aware Features
Browse-to-Test includes powerful context-aware capabilities that analyze your existing codebase to generate more intelligent and consistent test scripts.
### How It Works
1. **Context Collection**: Scans your project for existing tests, documentation, UI components, and API endpoints
2. **Pattern Analysis**: Uses AI to understand your project's testing conventions and patterns
3. **Intelligent Generation**: Leverages this context to generate tests that align with your existing codebase
4. **Similarity Detection**: Identifies similar existing tests to avoid duplication and ensure consistency
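One simple way to picture the similarity detection in step 4 is token-level Jaccard similarity between a candidate test and existing tests, compared against a cutoff like `context_similarity_threshold=0.3`. The library's actual scoring may differ; this is an assumed illustration:

```python
# Jaccard similarity over word tokens (sketch of step 4 above).
def jaccard_similarity(text_a, text_b):
    tokens_a, tokens_b = set(text_a.lower().split()), set(text_b.lower().split())
    if not tokens_a or not tokens_b:
        return 0.0
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)

new_test = "login with username and password then submit"
existing = "login with username and password form"
score = jaccard_similarity(new_test, existing)
print(score)         # 0.625
print(score >= 0.3)  # True: flagged as a similar existing test
```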
### What Gets Analyzed
| Component | Description | Benefits |
|-----------|-------------|----------|
| **Existing Tests** | Playwright, Selenium, Cypress test files | Consistent selector patterns, test structure |
| **Documentation** | README, API docs, contributing guides | Project-specific terminology and conventions |
| **UI Components** | React, Vue, Angular component files | Component props, data attributes, CSS classes |
| **API Endpoints** | Route definitions, OpenAPI specs | Endpoint patterns, authentication flows |
| **Recent Changes** | Git history and recent commits | Awareness of recent code changes |
### Context-Aware Benefits
✨ **Consistency**: Generated tests follow your project's established patterns and conventions
🎯 **Intelligence**: AI understands your specific domain, components, and testing strategies
🔍 **Similarity Detection**: Avoids duplicating existing test coverage
⚡ **Optimized Selectors**: Uses selectors that match your project's preferred patterns (data-testid, CSS classes, etc.)
🛡️ **Smart Defaults**: Automatically configures sensitive data handling and test setup based on existing tests
📊 **Quality Insights**: Provides quality scores and recommendations based on project analysis
### Configuration Options
```python
processing=btt.ProcessingConfig(
    # Enable context features
    collect_system_context=True,
    use_intelligent_analysis=True,

    # Control what gets analyzed
    include_existing_tests=True,
    include_documentation=True,
    include_ui_components=True,
    include_api_endpoints=True,
    include_database_schema=False,  # More expensive
    include_recent_changes=True,

    # Fine-tune analysis
    context_analysis_depth="deep",  # shallow, medium, deep
    max_similar_tests=5,
    context_similarity_threshold=0.3,
    max_context_files=100,

    # Performance settings
    context_cache_ttl=3600,  # Cache for 1 hour
    max_context_prompt_size=8000,
)
```
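The behavior implied by `context_cache_ttl` is that collected context is reused until it is older than the TTL, then recollected. A hypothetical TTL cache illustrating that idea (not the library's implementation):

```python
import time

# Sketch of TTL-based context caching: entries are reused while fresh.
class TTLCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (timestamp, value)

    def get(self, key, collect):
        now = time.time()
        entry = self._store.get(key)
        if entry and now - entry[0] < self.ttl:
            return entry[1]  # still fresh: reuse cached context
        value = collect()   # stale or missing: recollect
        self._store[key] = (now, value)
        return value

calls = []
cache = TTLCache(ttl_seconds=3600)
cache.get("project-context", lambda: calls.append(1) or "ctx")
cache.get("project-context", lambda: calls.append(1) or "ctx")
print(len(calls))  # 1: the second lookup hit the cache
```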
### Performance Optimization
For faster generation, use speed-optimized settings:
```python
config = btt.Config()
config.optimize_for_speed() # Disables heavy context analysis
# Or manually configure
config.processing.collect_system_context = False
config.processing.context_analysis_depth = "shallow"
config.processing.max_context_files = 20
```
For maximum accuracy, use accuracy-optimized settings:
```python
config = btt.Config()
config.optimize_for_accuracy() # Enables deep context analysis
# Or manually configure
config.processing.context_analysis_depth = "deep"
config.processing.max_context_files = 200
config.ai.max_tokens = 8000
```
## 🤝 Contributing
We welcome contributions! Here's how to get started:
### Development Setup
```bash
# Clone the repository
git clone https://github.com/yourusername/browse-to-test.git
cd browse-to-test
# Create virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install development dependencies
pip install -e .[dev,all]
# Run tests
pytest
# Format code
black browse_to_test/
isort browse_to_test/
# Type checking
mypy browse_to_test/
```
### Creating a Plugin
1. Create a new plugin class inheriting from `OutputPlugin`
2. Implement required methods (`plugin_name`, `supported_frameworks`, etc.)
3. Add tests for your plugin
4. Submit a pull request
### Reporting Issues
Please use the [GitHub issue tracker](https://github.com/yourusername/browse-to-test/issues) to report bugs or request features.
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
- OpenAI and Anthropic for providing powerful AI APIs
- Playwright and Selenium teams for excellent testing frameworks
- The open-source community for inspiration and contributions
## 📞 Support
- **Documentation**: [browse-to-test.readthedocs.io](https://browse-to-test.readthedocs.io/)
- **Issues**: [GitHub Issues](https://github.com/yourusername/browse-to-test/issues)
- **Discussions**: [GitHub Discussions](https://github.com/yourusername/browse-to-test/discussions)
---
**Made with ❤️ by the Browse-to-Test community**
Raw data
{
"_id": null,
"home_page": "https://github.com/yourusername/browse-to-test",
"name": "browse-to-test",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.8",
"maintainer_email": null,
"keywords": "test automation, browser testing, playwright, selenium, ai, code generation, testing, qa, end-to-end testing",
"author": "Browse-to-Test Contributors",
"author_email": null,
"download_url": "https://files.pythonhosted.org/packages/74/72/c80308bc47a9e87d2d192d90772a73c7a0039f239bb5da86cfd1a329b187/browse_to_test-0.2.10.tar.gz",
"platform": null,
"description": "# Browse-to-Test\n\n**AI-Powered Browser Automation to Test Script Converter**\n\nBrowse-to-Test is a Python library that uses AI to convert browser automation data into test scripts for various testing frameworks (Playwright, Selenium, Cypress, etc.). It provides an intelligent, configurable, and extensible way to transform recorded browser interactions into maintainable test code.\n\n## \ud83c\udf1f Features\n\n- **\ud83e\udd16 AI-Powered Conversion**: Uses OpenAI, Anthropic, or other AI providers to intelligently convert automation data\n- **\ud83e\udde0 Context-Aware Generation**: Leverages existing tests, documentation, and system knowledge for intelligent analysis\n- **\ud83d\udd0c Multi-Framework Support**: Generate tests for Playwright, Selenium, Cypress, and more\n- **\ud83c\udfd7\ufe0f Plugin Architecture**: Easily extensible with custom plugins for new frameworks/languages\n- **\u2699\ufe0f Highly Configurable**: Comprehensive configuration system for fine-tuning output\n- **\ud83d\udd0d Smart Analysis**: AI-powered action analysis and optimization with system context\n- **\ud83d\udcda System Intelligence**: Analyzes existing tests, UI components, API endpoints, and documentation\n- **\ud83c\udfaf Pattern Recognition**: Identifies similar tests and reuses established patterns\n- **\ud83d\udd10 Sensitive Data Handling**: Automatic detection and secure handling of sensitive information\n- **\ud83d\udcca Validation & Preview**: Built-in validation and preview capabilities with context insights\n- **\ud83d\ude80 Easy to Use**: Simple API with sensible defaults and intelligent recommendations\n\n## \ud83d\ude80 Quick Start\n\n### Installation\n\n```bash\n# Basic installation\npip install browse-to-test\n\n# With AI providers\npip install browse-to-test[openai,anthropic]\n\n# With testing frameworks \npip install browse-to-test[playwright,selenium]\n\n# Full installation\npip install browse-to-test[all]\n```\n\n### Basic 
Usage\n\n```python\nimport browse_to_test as btt\n\n# Your browser automation data\nautomation_data = [\n {\n \"model_output\": {\n \"action\": [{\"go_to_url\": {\"url\": \"https://example.com\"}}]\n },\n \"state\": {\"interacted_element\": []}\n },\n # ... more steps\n]\n\n# Convert to Playwright test script - one line!\nscript = btt.convert(automation_data, framework=\"playwright\", ai_provider=\"openai\")\nprint(script)\n```\n\n### Advanced Usage with ConfigBuilder\n\n```python\nimport browse_to_test as btt\n\n# Build custom configuration with fluent interface\nconfig = btt.ConfigBuilder() \\\n .framework(\"playwright\") \\\n .ai_provider(\"openai\", model=\"gpt-4\") \\\n .language(\"python\") \\\n .include_assertions(True) \\\n .include_error_handling(True) \\\n .sensitive_data_keys([\"username\", \"password\"]) \\\n .enable_context_collection() \\\n .thorough_mode() \\\n .build()\n\n# Create converter with custom config\nconverter = btt.E2eTestConverter(config)\nscript = converter.convert(automation_data)\n```\n\n### Incremental Live Generation\n\n```python\nimport browse_to_test as btt\n\n# Start incremental session\nconfig = btt.ConfigBuilder().framework(\"playwright\").build()\nsession = btt.IncrementalSession(config)\n\n# Start session\nresult = session.start(\"https://example.com\")\n\n# Add steps as they happen\nfor step_data in automation_steps:\n result = session.add_step(step_data)\n print(f\"Current script:\\n{result.current_script}\")\n\n# Finalize when done\nfinal = session.finalize()\nprint(f\"Complete test script:\\n{final.current_script}\")\n```\n\n### Context-Aware Generation\n\n```python\nimport browse_to_test as btt\n\n# Enable context-aware features\nconfig = btt.Config(\n ai=btt.AIConfig(\n provider=\"openai\",\n model=\"gpt-4\",\n ),\n output=btt.OutputConfig(\n framework=\"playwright\",\n include_assertions=True,\n include_error_handling=True,\n ),\n processing=btt.ProcessingConfig(\n # Enable intelligent analysis with system context\n 
analyze_actions_with_ai=True,\n collect_system_context=True,\n use_intelligent_analysis=True,\n \n # Context collection settings\n include_existing_tests=True,\n include_documentation=True,\n include_ui_components=True,\n include_api_endpoints=True,\n \n # Analysis settings\n context_analysis_depth=\"deep\",\n max_similar_tests=5,\n context_similarity_threshold=0.3,\n ),\n project_root=\".\", # Project root for context collection\n verbose=True\n)\n\norchestrator = btt.E2eScriptOrchestrator(config)\n\n# Generate context-aware test script\ntest_script = orchestrator.generate_test_script(\n automation_data=automation_data,\n target_url=\"https://example.com/login\",\n context_hints={\n \"flow_type\": \"authentication\",\n \"critical_elements\": [\"username\", \"password\", \"submit\"]\n }\n)\n\n# Preview with context insights\npreview = orchestrator.preview_conversion(\n automation_data=automation_data,\n target_url=\"https://example.com/login\"\n)\n\nprint(f\"Similar tests found: {len(preview.get('similar_tests', []))}\")\nprint(f\"Context quality score: {preview.get('estimated_quality_score', 0)}\")\n```\n\n## \ud83d\udcd6 Documentation\n\n### Core Concepts\n\n#### 1. Input Data Format\n\nBrowse-to-Test expects browser automation data in a specific JSON format:\n\n```json\n[\n {\n \"model_output\": {\n \"action\": [\n {\n \"action_type\": {\n \"parameter1\": \"value1\",\n \"parameter2\": \"value2\"\n }\n }\n ]\n },\n \"state\": {\n \"interacted_element\": [\n {\n \"xpath\": \"//button[@id='submit']\",\n \"css_selector\": \"button#submit\",\n \"attributes\": {\"id\": \"submit\", \"type\": \"button\"}\n }\n ]\n },\n \"metadata\": {\n \"step_start_time\": 1640995200.0,\n \"elapsed_time\": 2.5\n }\n }\n]\n```\n\n#### 2. 
Supported Actions\n\n| Action Type | Description | Parameters |\n|-------------|-------------|------------|\n| `go_to_url` | Navigate to URL | `url` |\n| `input_text` | Enter text in field | `text`, `index` |\n| `click_element` | Click an element | `index` |\n| `scroll_down` | Scroll page down | `amount` (optional) |\n| `scroll_up` | Scroll page up | `amount` (optional) |\n| `wait` | Wait for time | `seconds` |\n| `done` | Mark completion | `text`, `success` |\n\n#### 3. Configuration System\n\nThe library uses a hierarchical configuration system:\n\n```python\nconfig = btt.Config(\n # AI Provider Settings\n ai=btt.AIConfig(\n provider=\"openai\", # openai, anthropic, azure, local\n model=\"gpt-4\", # Provider-specific model\n temperature=0.1, # Generation randomness (0-2)\n max_tokens=4000, # Maximum response tokens\n api_key=\"your-key\", # API key (or use env vars)\n ),\n \n # Output Settings\n output=btt.OutputConfig(\n framework=\"playwright\", # Target framework\n language=\"python\", # Target language\n test_type=\"script\", # script, test, spec\n include_assertions=True, # Add test assertions\n include_waits=True, # Add explicit waits\n include_error_handling=True,# Add try-catch blocks\n include_logging=True, # Add logging statements\n sensitive_data_keys=[\"username\", \"password\"],\n mask_sensitive_data=True, # Mask sensitive data\n ),\n \n # Processing Settings \n processing=btt.ProcessingConfig(\n analyze_actions_with_ai=True, # Use AI for analysis\n optimize_selectors=True, # Optimize CSS/XPath selectors\n validate_actions=True, # Validate action feasibility\n strict_mode=False, # Fail on any errors\n \n # Context Collection Settings\n collect_system_context=True, # Enable context collection\n use_intelligent_analysis=True, # Use AI with system context\n include_existing_tests=True, # Analyze existing test files\n include_documentation=True, # Include project documentation\n include_ui_components=True, # Analyze UI component files\n 
include_api_endpoints=True, # Include API endpoint info\n include_recent_changes=True, # Consider recent git changes\n \n # Context Analysis Settings\n context_analysis_depth=\"deep\", # shallow, medium, deep\n max_similar_tests=5, # Max similar tests to consider\n context_similarity_threshold=0.3, # Similarity threshold (0-1)\n max_context_files=100, # Limit files for performance\n ),\n \n # Global Settings\n debug=False,\n verbose=False,\n log_level=\"INFO\",\n)\n```\n\n### Environment Variables\n\nSet these environment variables for AI providers:\n\n```bash\n# OpenAI\nexport OPENAI_API_KEY=\"your-openai-key\"\n\n# Anthropic\nexport ANTHROPIC_API_KEY=\"your-anthropic-key\"\n\n# Azure OpenAI\nexport AZURE_OPENAI_API_KEY=\"your-azure-key\"\nexport AZURE_OPENAI_ENDPOINT=\"your-azure-endpoint\"\n\n# Browse-to-Test specific\nexport BROWSE_TO_TEST_AI_PROVIDER=\"openai\"\nexport BROWSE_TO_TEST_OUTPUT_FRAMEWORK=\"playwright\"\nexport BROWSE_TO_TEST_DEBUG=\"true\"\n\n# Context-aware features\nexport BROWSE_TO_TEST_PROCESSING_COLLECT_CONTEXT=\"true\"\nexport BROWSE_TO_TEST_PROCESSING_USE_INTELLIGENT_ANALYSIS=\"true\"\nexport BROWSE_TO_TEST_PROCESSING_CONTEXT_ANALYSIS_DEPTH=\"deep\"\n```\n\n## \ud83d\udd0c Plugin System\n\nBrowse-to-Test uses a plugin architecture to support different frameworks and languages.\n\n### Available Plugins\n\n| Plugin | Frameworks | Languages | Status |\n|--------|------------|-----------|--------|\n| Playwright | `playwright` | `python` | \u2705 Stable |\n| Selenium | `selenium`, `webdriver` | `python` | \u2705 Stable |\n| Cypress | `cypress` | `javascript`, `typescript` | \ud83d\udea7 Community |\n\n### Creating Custom Plugins\n\n```python\nfrom browse_to_test.plugins.base import OutputPlugin, GeneratedTestScript\n\nclass MyCustomPlugin(OutputPlugin):\n @property\n def plugin_name(self) -> str:\n return \"my-framework\"\n \n @property \n def supported_frameworks(self) -> List[str]:\n return [\"my-framework\"]\n \n @property\n def 
supported_languages(self) -> List[str]:\n return [\"python\", \"javascript\"]\n \n def generate_test_script(self, parsed_data, analysis_results=None):\n # Your custom generation logic here\n script_content = self._generate_custom_script(parsed_data)\n \n return GeneratedTestScript(\n content=script_content,\n language=self.config.language,\n framework=self.config.framework,\n )\n\n# Register your plugin\nregistry = btt.PluginRegistry()\nregistry.register_plugin(\"my-framework\", MyCustomPlugin)\n```\n\n## \ud83d\udee0\ufe0f API Reference\n\n### Main Functions\n\n#### `convert_to_test_script(automation_data, output_framework, ai_provider, config=None)`\n\nConvert automation data to test script (convenience function).\n\n**Parameters:**\n- `automation_data`: List of automation steps or path to JSON file\n- `output_framework`: Target framework (\"playwright\", \"selenium\", etc.)\n- `ai_provider`: AI provider (\"openai\", \"anthropic\", etc.)\n- `config`: Optional configuration dictionary\n\n**Returns:** Generated test script as string\n\n#### `list_available_plugins()`\n\nList all available output framework plugins.\n\n**Returns:** List of plugin names\n\n#### `list_available_ai_providers()`\n\nList all available AI providers.\n\n**Returns:** List of provider names\n\n### Core Classes\n\n#### `E2eScriptOrchestrator(config)`\n\nMain orchestrator class that coordinates the conversion process.\n\n**Methods:**\n- `generate_test_script(automation_data, custom_config=None)`: Generate test script\n- `generate_with_multiple_frameworks(automation_data, frameworks)`: Generate for multiple frameworks\n- `validate_configuration()`: Validate current configuration\n- `preview_conversion(automation_data, max_actions=5)`: Preview conversion\n\n#### `Config`, `AIConfig`, `OutputConfig`, `ProcessingConfig`\n\nConfiguration classes for different aspects of the library.\n\n#### `PluginRegistry`\n\nRegistry for managing output plugins.\n\n**Methods:**\n- `register_plugin(name, 
plugin_class)`: Register a new plugin
- `create_plugin(config)`: Create a plugin instance
- `list_available_plugins()`: List available plugins

## 📊 Examples

### Generate Multiple Frameworks

```python
orchestrator = btt.E2eScriptOrchestrator(config)
scripts = orchestrator.generate_with_multiple_frameworks(
    automation_data,
    ["playwright", "selenium"]
)

for framework, script in scripts.items():
    with open(f"test_{framework}.py", "w") as f:
        f.write(script)
```

### Load from File

```python
import json

# Save automation data to file
with open("automation_data.json", "w") as f:
    json.dump(automation_data, f)

# Load and convert from file
script = btt.convert_to_test_script(
    automation_data="automation_data.json",
    output_framework="playwright"
)
```

### Custom Configuration

```python
config = {
    "ai": {"provider": "anthropic", "model": "claude-3-sonnet"},
    "output": {"include_screenshots": True, "add_comments": True},
    "processing": {"analyze_actions_with_ai": False}
}

script = btt.convert_to_test_script(
    automation_data=data,
    output_framework="selenium",
    config=config
)
```

### Preview Before Generation

```python
orchestrator = btt.E2eScriptOrchestrator(config)
preview = orchestrator.preview_conversion(automation_data)

print(f"Total steps: {preview['total_steps']}")
print(f"Total actions: {preview['total_actions']}")
print(f"Action types: {preview['action_types']}")
print(f"Validation issues: {preview['validation_issues']}")
print(f"Similar tests: {len(preview.get('similar_tests', []))}")
print(f"Quality score: {preview.get('estimated_quality_score', 0)}")
```

## 🧠 Context-Aware Features

Browse-to-Test includes powerful context-aware capabilities that analyze your existing codebase to generate more intelligent and consistent test scripts.

### How It Works

1. **Context Collection**: Scans your project for existing tests, documentation, UI components, and API endpoints
2. **Pattern Analysis**: Uses AI to understand your project's testing conventions and patterns
3. **Intelligent Generation**: Leverages this context to generate tests that align with your existing codebase
4. **Similarity Detection**: Identifies similar existing tests to avoid duplication and ensure consistency

### What Gets Analyzed

| Component | Description | Benefits |
|-----------|-------------|----------|
| **Existing Tests** | Playwright, Selenium, Cypress test files | Consistent selector patterns, test structure |
| **Documentation** | README, API docs, contributing guides | Project-specific terminology and conventions |
| **UI Components** | React, Vue, Angular component files | Component props, data attributes, CSS classes |
| **API Endpoints** | Route definitions, OpenAPI specs | Endpoint patterns, authentication flows |
| **Recent Changes** | Git history and recent commits | Awareness of recent code changes |

### Context-Aware Benefits

✨ **Consistency**: Generated tests follow your project's established patterns and conventions

🎯 **Intelligence**: AI understands your specific domain, components, and testing strategies

🔍 **Similarity Detection**: Avoids duplicating existing test coverage

⚡ **Optimized Selectors**: Uses selectors that match your project's preferred patterns (data-testid, CSS classes, etc.)

🛡️ **Smart Defaults**: Automatically configures sensitive data handling and test setup based on existing tests

📊 **Quality Insights**: Provides quality scores and recommendations based on project analysis

### Configuration Options

```python
processing=btt.ProcessingConfig(
    # Enable context features
    collect_system_context=True,
    use_intelligent_analysis=True,

    # Control what gets analyzed
    include_existing_tests=True,
    include_documentation=True,
    include_ui_components=True,
    include_api_endpoints=True,
    include_database_schema=False,  # More expensive
    include_recent_changes=True,

    # Fine-tune analysis
    context_analysis_depth="deep",  # shallow, medium, deep
    max_similar_tests=5,
    context_similarity_threshold=0.3,
    max_context_files=100,

    # Performance settings
    context_cache_ttl=3600,  # Cache for 1 hour
    max_context_prompt_size=8000,
)
```

### Performance Optimization

For faster generation, use speed-optimized settings:

```python
config = btt.Config()
config.optimize_for_speed()  # Disables heavy context analysis

# Or manually configure
config.processing.collect_system_context = False
config.processing.context_analysis_depth = "shallow"
config.processing.max_context_files = 20
```

For maximum accuracy, use accuracy-optimized settings:

```python
config = btt.Config()
config.optimize_for_accuracy()  # Enables deep context analysis

# Or manually configure
config.processing.context_analysis_depth = "deep"
config.processing.max_context_files = 200
config.ai.max_tokens = 8000
```

## 🤝 Contributing

We welcome contributions! Here's how to get started:

### Development Setup

```bash
# Clone the repository
git clone https://github.com/yourusername/browse-to-test.git
cd browse-to-test

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install development dependencies
pip install -e .[dev,all]

# Run tests
pytest

# Format code
black browse_to_test/
isort browse_to_test/

# Type checking
mypy browse_to_test/
```

### Creating a Plugin

1. Create a new plugin class inheriting from `OutputPlugin`
2. Implement the required methods (`plugin_name`, `supported_frameworks`, etc.)
3. Add tests for your plugin
4. Submit a pull request

### Reporting Issues

Please use the [GitHub issue tracker](https://github.com/yourusername/browse-to-test/issues) to report bugs or request features.

## 📄 License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## 🙏 Acknowledgments

- OpenAI and Anthropic for providing powerful AI APIs
- Playwright and Selenium teams for excellent testing frameworks
- The open-source community for inspiration and contributions

## 📞 Support

- **Documentation**: [browse-to-test.readthedocs.io](https://browse-to-test.readthedocs.io/)
- **Issues**: [GitHub Issues](https://github.com/yourusername/browse-to-test/issues)
- **Discussions**: [GitHub Discussions](https://github.com/yourusername/browse-to-test/discussions)

---

**Made with ❤️ by the Browse-to-Test community**
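As a closing illustration of the "Creating a Plugin" steps from the Contributing section, here is a hedged sketch of what a custom plugin might look like. The `OutputPlugin` base class below is a minimal stand-in written for this example; the real browse_to_test base class, its method signatures, and its registration API may differ.

```python
# Hedged sketch: a stand-in OutputPlugin base class modeled on the
# API reference above. The actual browse_to_test base class may have
# a different shape; treat this as an illustration, not the real API.
from abc import ABC, abstractmethod
from typing import List


class OutputPlugin(ABC):
    """Minimal stand-in for the plugin base class (assumed shape)."""

    @property
    @abstractmethod
    def plugin_name(self) -> str:
        """Unique name used when registering the plugin."""

    @property
    @abstractmethod
    def supported_frameworks(self) -> List[str]:
        """Frameworks this plugin can generate scripts for."""

    @abstractmethod
    def generate_script(self, automation_data: list) -> str:
        """Translate recorded automation steps into test code."""


class CypressPlugin(OutputPlugin):
    """Illustrative custom plugin that emits a trivial Cypress spec."""

    @property
    def plugin_name(self) -> str:
        return "cypress"

    @property
    def supported_frameworks(self) -> List[str]:
        return ["cypress"]

    def generate_script(self, automation_data: list) -> str:
        # A real plugin would map every recorded action type to the
        # corresponding cy.* command; this sketch handles only go_to_url.
        lines = ["describe('generated test', () => {", "  it('runs', () => {"]
        for step in automation_data:
            for action in step.get("model_output", {}).get("action", []):
                if "go_to_url" in action:
                    url = action["go_to_url"]["url"]
                    lines.append(f"    cy.visit('{url}');")
        lines += ["  });", "});"]
        return "\n".join(lines)
```

With the real library you would then register the class through the plugin-registration function shown (truncated) in the API reference above, so `create_plugin(config)` can instantiate it by name.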
"bugtrack_url": null,
"license": null,
"summary": "AI-powered browser automation to test script converter",
"version": "0.2.10",
"project_urls": {
"Bug Reports": "https://github.com/yourusername/browse-to-test/issues",
"Documentation": "https://browse-to-test.readthedocs.io/",
"Homepage": "https://github.com/yourusername/browse-to-test",
"Source": "https://github.com/yourusername/browse-to-test"
},
"split_keywords": [
"test automation",
" browser testing",
" playwright",
" selenium",
" ai",
" code generation",
" testing",
" qa",
" end-to-end testing"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "267530e0722fb3e7948da7cff8ff54eb4b0493eb97feafd0ffee4d7c911ecdd3",
"md5": "0996ac2163f4a52d9c089a92a850d667",
"sha256": "40a808b3366c645732d933c9f0109bf9a10380be10acf1e040f72185addee3e6"
},
"downloads": -1,
"filename": "browse_to_test-0.2.10-py3-none-any.whl",
"has_sig": false,
"md5_digest": "0996ac2163f4a52d9c089a92a850d667",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.8",
"size": 131966,
"upload_time": "2025-07-26T01:10:44",
"upload_time_iso_8601": "2025-07-26T01:10:44.771922Z",
"url": "https://files.pythonhosted.org/packages/26/75/30e0722fb3e7948da7cff8ff54eb4b0493eb97feafd0ffee4d7c911ecdd3/browse_to_test-0.2.10-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "7472c80308bc47a9e87d2d192d90772a73c7a0039f239bb5da86cfd1a329b187",
"md5": "a334a46a325b8ad28f3392cf19395687",
"sha256": "8d8d7feff3ffd47f5a2374850a03df1f38cefc65f59b96c9f2343720a89bca12"
},
"downloads": -1,
"filename": "browse_to_test-0.2.10.tar.gz",
"has_sig": false,
"md5_digest": "a334a46a325b8ad28f3392cf19395687",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.8",
"size": 213703,
"upload_time": "2025-07-26T01:10:46",
"upload_time_iso_8601": "2025-07-26T01:10:46.040692Z",
"url": "https://files.pythonhosted.org/packages/74/72/c80308bc47a9e87d2d192d90772a73c7a0039f239bb5da86cfd1a329b187/browse_to_test-0.2.10.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-07-26 01:10:46",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "yourusername",
"github_project": "browse-to-test",
"github_not_found": true,
"lcname": "browse-to-test"
}