# FreeCrawl MCP Server
A production-ready Model Context Protocol (MCP) server for web scraping and document processing, designed as a self-hosted replacement for Firecrawl.
## 🚀 Features
- **JavaScript-enabled web scraping** with Playwright and anti-detection measures
- **Document processing** for PDFs, DOCX, and other formats, with fallback strategies and OCR support
- **Concurrent batch processing** with configurable limits
- **Intelligent caching** with SQLite backend
- **Rate limiting** per domain
- **Comprehensive error handling** with retry logic
- **Easy installation** via `uvx` or local development setup
- **Health monitoring** and metrics collection
## MCP Config (using `uvx`)
```json
{
  "mcpServers": {
    "freecrawl": {
      "command": "uvx",
      "args": ["freecrawl-mcp"]
    }
  }
}
```
## 📦 Installation & Usage
### Quick Start with uvx (Recommended)
The easiest way to use FreeCrawl is with `uvx`, which automatically manages dependencies:
```bash
# Install browsers on first run
uvx freecrawl-mcp --install-browsers
# Test functionality
uvx freecrawl-mcp --test
```
### Local Development Setup
For local development or customization:
1. **Clone from GitHub:**
```bash
git clone https://github.com/dylan-gluck/freecrawl-mcp.git
cd freecrawl-mcp
```
2. **Set up environment:**
```bash
# Sync dependencies
uv sync
# Install browser dependencies
uv run freecrawl-mcp --install-browsers
# Run tests
uv run freecrawl-mcp --test
```
3. **Run the server:**
```bash
uv run freecrawl-mcp
```
## 🛠 Configuration
Configure FreeCrawl using environment variables:
### Basic Configuration
```bash
# Transport (stdio for MCP, http for REST API)
export FREECRAWL_TRANSPORT=stdio
# Browser pool settings
export FREECRAWL_MAX_BROWSERS=3
export FREECRAWL_HEADLESS=true
# Concurrency limits
export FREECRAWL_MAX_CONCURRENT=10
export FREECRAWL_MAX_PER_DOMAIN=3
# Cache settings
export FREECRAWL_CACHE=true
export FREECRAWL_CACHE_DIR=/tmp/freecrawl_cache
export FREECRAWL_CACHE_TTL=3600
export FREECRAWL_CACHE_SIZE=536870912 # 512MB
# Rate limiting
export FREECRAWL_RATE_LIMIT=60 # requests per minute
# Logging
export FREECRAWL_LOG_LEVEL=INFO
```
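How these variables map onto the server's runtime settings is internal to FreeCrawl, but the pattern is straightforward. Below is a minimal sketch, assuming a plain dataclass whose fields mirror the variables above; the names and defaults are illustrative, not FreeCrawl's actual config API:
```python
import os
from dataclasses import dataclass

@dataclass
class FreeCrawlConfig:
    """Illustrative config object; fields mirror the FREECRAWL_* variables above."""
    transport: str = "stdio"
    max_browsers: int = 3
    headless: bool = True
    max_concurrent: int = 10
    max_per_domain: int = 3
    cache_enabled: bool = True
    cache_dir: str = "/tmp/freecrawl_cache"
    cache_ttl: int = 3600
    cache_size: int = 512 * 1024 * 1024  # 536870912 bytes
    rate_limit: int = 60
    log_level: str = "INFO"

def _as_bool(value: str | None, default: bool) -> bool:
    """Parse common truthy strings; fall back to the default if unset."""
    if value is None:
        return default
    return value.strip().lower() in ("1", "true", "yes")

def load_config() -> FreeCrawlConfig:
    """Read FREECRAWL_* environment variables, falling back to defaults."""
    env = os.environ
    return FreeCrawlConfig(
        transport=env.get("FREECRAWL_TRANSPORT", "stdio"),
        max_browsers=int(env.get("FREECRAWL_MAX_BROWSERS", 3)),
        headless=_as_bool(env.get("FREECRAWL_HEADLESS"), True),
        max_concurrent=int(env.get("FREECRAWL_MAX_CONCURRENT", 10)),
        max_per_domain=int(env.get("FREECRAWL_MAX_PER_DOMAIN", 3)),
        cache_enabled=_as_bool(env.get("FREECRAWL_CACHE"), True),
        cache_dir=env.get("FREECRAWL_CACHE_DIR", "/tmp/freecrawl_cache"),
        cache_ttl=int(env.get("FREECRAWL_CACHE_TTL", 3600)),
        cache_size=int(env.get("FREECRAWL_CACHE_SIZE", 512 * 1024 * 1024)),
        rate_limit=int(env.get("FREECRAWL_RATE_LIMIT", 60)),
        log_level=env.get("FREECRAWL_LOG_LEVEL", "INFO"),
    )
```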
### Security Settings
```bash
# API authentication (optional)
export FREECRAWL_REQUIRE_API_KEY=false
export FREECRAWL_API_KEYS=key1,key2,key3
# Domain blocking
export FREECRAWL_BLOCKED_DOMAINS=localhost,127.0.0.1
# Anti-detection
export FREECRAWL_ANTI_DETECT=true
export FREECRAWL_ROTATE_UA=true
```
## 🔧 MCP Tools
FreeCrawl provides the following MCP tools:
### `freecrawl_scrape`
Scrape content from a single URL with advanced options.
**Parameters:**
- `url` (string): URL to scrape
- `formats` (array): Output formats - `["markdown", "html", "text", "screenshot", "structured"]`
- `javascript` (boolean): Enable JavaScript execution (default: true)
- `wait_for` (string, optional): CSS selector to wait for, or a wait time in milliseconds
- `anti_bot` (boolean): Enable anti-detection measures (default: true)
- `headers` (object, optional): Custom HTTP headers
- `cookies` (object, optional): Custom cookies
- `cache` (boolean): Use cached results if available (default: true)
- `timeout` (number): Total timeout in milliseconds (default: 30000)
**Example:**
```json
{
  "name": "freecrawl_scrape",
  "arguments": {
    "url": "https://example.com",
    "formats": ["markdown", "screenshot"],
    "javascript": true,
    "wait_for": "2000"
  }
}
```
### `freecrawl_batch_scrape`
Scrape multiple URLs concurrently.
**Parameters:**
- `urls` (array): List of URLs to scrape (max 100)
- `concurrency` (number): Maximum concurrent requests (default: 5)
- `formats` (array): Output formats (default: `["markdown"]`)
- `common_options` (object, optional): Options applied to all URLs
- `continue_on_error` (boolean): Continue if individual URLs fail (default: true)
**Example:**
```json
{
  "name": "freecrawl_batch_scrape",
  "arguments": {
    "urls": [
      "https://example.com/page1",
      "https://example.com/page2"
    ],
    "concurrency": 3,
    "formats": ["markdown", "text"]
  }
}
```
### `freecrawl_extract`
Extract structured data from a page using a schema-driven approach.
**Parameters:**
- `url` (string): URL to extract data from
- `schema` (object): JSON Schema or Pydantic model definition
- `prompt` (string, optional): Custom extraction instructions
- `validation` (boolean): Validate against schema (default: true)
- `multiple` (boolean): Extract multiple matching items (default: false)
**Example:**
```json
{
  "name": "freecrawl_extract",
  "arguments": {
    "url": "https://example.com/product",
    "schema": {
      "type": "object",
      "properties": {
        "title": {"type": "string"},
        "price": {"type": "number"}
      }
    }
  }
}
```
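Because `schema` accepts either raw JSON Schema or a Pydantic model definition, one convenient way to build the schema object is to define a Pydantic model and export its JSON Schema. A brief sketch, assuming Pydantic v2 is available; the `Product` model here is hypothetical:
```python
from pydantic import BaseModel

class Product(BaseModel):
    """Hypothetical model describing the fields to extract."""
    title: str
    price: float

# Pydantic v2 emits standard JSON Schema, which can be passed as the
# `schema` argument of freecrawl_extract.
schema = Product.model_json_schema()
print(schema["properties"])  # title -> string, price -> number
```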
### `freecrawl_process_document`
Process documents (PDF, DOCX, etc.) with OCR support.
**Parameters:**
- `file_path` (string, optional): Path to document file
- `url` (string, optional): URL to download document from
- `strategy` (string): Processing strategy - `"fast"`, `"hi_res"`, `"ocr_only"` (default: "hi_res")
- `formats` (array): Output formats - `["markdown", "structured", "text"]`
- `languages` (array, optional): OCR languages (e.g., `["eng", "fra"]`)
- `extract_images` (boolean): Extract embedded images (default: false)
- `extract_tables` (boolean): Extract and structure tables (default: true)
**Example:**
```json
{
  "name": "freecrawl_process_document",
  "arguments": {
    "url": "https://example.com/document.pdf",
    "strategy": "hi_res",
    "formats": ["markdown", "structured"]
  }
}
```
### `freecrawl_health_check`
Get server health status and metrics.
**Example:**
```json
{
  "name": "freecrawl_health_check",
  "arguments": {}
}
```
## 🔄 Integration with Claude Code
### MCP Configuration
Add FreeCrawl to your MCP configuration:
**Using uvx (Recommended):**
```json
{
  "mcpServers": {
    "freecrawl": {
      "command": "uvx",
      "args": ["freecrawl-mcp"]
    }
  }
}
```
**Using local development setup:**
```json
{
  "mcpServers": {
    "freecrawl": {
      "command": "uv",
      "args": ["run", "freecrawl-mcp"],
      "cwd": "/path/to/freecrawl-mcp"
    }
  }
}
```
### Usage in Prompts
```
Please scrape the content from https://example.com and extract the main article text in markdown format.
```
Claude Code will automatically use the `freecrawl_scrape` tool to fetch and process the content.
## 🚀 Performance & Scalability
### Resource Usage
- **Memory**: ~100MB base + ~50MB per browser instance
- **CPU**: Moderate usage during active scraping
- **Storage**: Cache grows based on configured limits
### Throughput
- **Single requests**: 2-5 seconds typical response time
- **Batch processing**: 10-50 concurrent requests depending on configuration
- **Cache hit ratio**: 30%+ for repeated content
### Optimization Tips
1. **Enable caching** for frequently accessed content
2. **Adjust concurrency** based on target site rate limits
3. **Use appropriate formats** - markdown is faster than screenshots
4. **Configure rate limiting** to avoid being blocked
## 🛡 Security Considerations
### Anti-Detection
- Rotating user agents
- Realistic browser fingerprints
- Request timing randomization
- JavaScript execution in sandboxed environment
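FreeCrawl's exact anti-detection internals are not documented here, but the general pattern with Playwright looks roughly like the sketch below; the user-agent pool, viewport ranges, and delay bounds are illustrative values, not FreeCrawl's actual settings:
```python
import asyncio
import random
from playwright.async_api import async_playwright

# Illustrative user-agent pool; FreeCrawl's actual rotation list may differ.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0 Safari/537.36",
]

async def fetch_with_rotation(url: str) -> str:
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=True)
        # Rotate the user agent and randomize the viewport per context.
        context = await browser.new_context(
            user_agent=random.choice(USER_AGENTS),
            viewport={"width": random.randint(1280, 1920),
                      "height": random.randint(720, 1080)},
        )
        page = await context.new_page()
        # A randomized pause makes request timing less uniform.
        await asyncio.sleep(random.uniform(0.5, 2.0))
        await page.goto(url, wait_until="networkidle")
        html = await page.content()
        await browser.close()
        return html

# asyncio.run(fetch_with_rotation("https://example.com"))
```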
### Input Validation
- URL format validation
- Private IP blocking
- Domain blocklist support
- Request size limits
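As a rough illustration of how these checks fit together (the function name and blocklist below are hypothetical, not FreeCrawl's internal API):
```python
import ipaddress
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"localhost", "127.0.0.1"}  # mirrors FREECRAWL_BLOCKED_DOMAINS

def is_url_allowed(url: str) -> bool:
    """Reject malformed URLs, blocked domains, and private/loopback IP literals."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    if parsed.hostname in BLOCKED_DOMAINS:
        return False
    try:
        ip = ipaddress.ip_address(parsed.hostname)
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return False
    except ValueError:
        pass  # hostname is a DNS name, not an IP literal
    return True

assert is_url_allowed("https://example.com/page")
assert not is_url_allowed("http://127.0.0.1:8080/admin")
```
A production implementation would also resolve hostnames and re-check the resulting addresses, since a DNS name can point at a private IP.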
### Resource Protection
- Memory usage monitoring
- Browser pool size limits
- Request timeout enforcement
- Rate limiting per domain
## 🔧 Troubleshooting
### Common Issues
| Issue | Possible Cause | Solution |
|-------|----------------|----------|
| High memory usage | Too many browser instances | Reduce `FREECRAWL_MAX_BROWSERS` |
| Slow responses | JavaScript-heavy sites | Increase timeout or disable JS |
| Bot detection | Missing anti-detection | Ensure `FREECRAWL_ANTI_DETECT=true` |
| Cache misses | TTL too short | Increase `FREECRAWL_CACHE_TTL` |
| Import errors | Missing dependencies | Run `uvx freecrawl-mcp --test` |
### Debug Mode
**With uvx:**
```bash
export FREECRAWL_LOG_LEVEL=DEBUG
uvx freecrawl-mcp --test
```
**Local development:**
```bash
export FREECRAWL_LOG_LEVEL=DEBUG
uv run freecrawl-mcp --test
```
## 📈 Monitoring & Observability
### Health Metrics
- Browser pool status
- Memory and CPU usage
- Cache hit rates
- Request success rates
- Response times
### Logging
FreeCrawl provides structured logging with configurable levels:
- ERROR: Critical failures
- WARNING: Recoverable issues
- INFO: General operations
- DEBUG: Detailed troubleshooting
## 🔧 Development
### Running Tests
**With uvx:**
```bash
# Basic functionality test
uvx freecrawl-mcp --test
```
**Local development:**
```bash
# Basic functionality test
uv run freecrawl-mcp --test
```
### Code Structure
- **Core server**: `FreeCrawlServer` class
- **Browser management**: `BrowserPool` for resource pooling
- **Content extraction**: `ContentExtractor` with multiple strategies
- **Caching**: `CacheManager` with SQLite backend
- **Rate limiting**: `RateLimiter` with token bucket algorithm
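The token bucket used by `RateLimiter` is a standard construction; here is a self-contained sketch of the idea, with illustrative class and function names rather than FreeCrawl's actual internals:
```python
import time
from collections import defaultdict

class TokenBucket:
    """Refills tokens continuously up to `capacity`; each request spends one token."""
    def __init__(self, rate_per_minute: int = 60, capacity: int | None = None):
        self.rate = rate_per_minute / 60.0          # tokens added per second
        self.capacity = capacity or rate_per_minute
        self.tokens = float(self.capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# One bucket per domain, mirroring "rate limiting per domain".
buckets: dict[str, TokenBucket] = defaultdict(lambda: TokenBucket(rate_per_minute=60))

def allow_request(domain: str) -> bool:
    return buckets[domain].allow()
```
Each domain gets its own bucket, so a burst against one site does not starve requests to other sites.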
## 📄 License
This project is licensed under the MIT License - see the technical specification for details.
## 🤝 Contributing
1. Fork the repository at https://github.com/dylan-gluck/freecrawl-mcp
2. Create a feature branch
3. Set up local development: `uv sync`
4. Run tests: `uv run freecrawl-mcp --test`
5. Submit a pull request
## 📚 Technical Specification
For detailed technical information, see `ai_docs/FREECRAWL_TECHNICAL_SPEC.md`.
---
**FreeCrawl MCP Server** - Self-hosted web scraping for the modern web 🚀