# ScraperAPI MCP server
The ScraperAPI MCP server enables LLM clients to retrieve and process web scraping requests using the ScraperAPI services.
## Table of Contents
- [Features](#features)
- [Architecture](#architecture)
- [Installation](#installation)
- [API Reference](#api-reference)
- [Configuration](#configuration)
- [Development](#development)
## Features
- Full implementation of the Model Context Protocol specification
- Seamless integration with ScraperAPI for web scraping
- Simple setup with Docker or Python
## Architecture
```
┌───────────────┐     ┌───────────────────────┐     ┌───────────────┐
│   LLM Client  │────▶│  Scraper MCP Server   │────▶│    AI Model   │
└───────────────┘     └───────────────────────┘     └───────────────┘
                                  │
                                  ▼
                        ┌──────────────────┐
                        │  ScraperAPI API  │
                        └──────────────────┘
```
## Installation
The ScraperAPI MCP Server is designed to run as a local server on your machine; your LLM client launches it automatically once configured.
### Prerequisites
- Python 3.11+
- Docker (optional)
### Using Python
Add this to your client configuration file:
```json
{
  "mcpServers": {
    "ScraperAPI": {
      "command": "python",
      "args": ["-m", "scraperapi_mcp_server"],
      "env": {
        "API_KEY": "<YOUR_SCRAPERAPI_API_KEY>"
      }
    }
  }
}
```
### Using Docker
Add this to your client configuration file:
```json
{
  "mcpServers": {
    "ScraperAPI": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "-e",
        "API_KEY=${API_KEY}",
        "--rm",
        "scraperapi-mcp-server"
      ]
    }
  }
}
```
> [!TIP]
>
> If your command is not working (for example, you see a `package not found` error when trying to start the server), double-check the path you are using. To find the correct path, activate your virtual environment first, then run:
> ```bash
> which <YOUR_COMMAND>
> ```
## API Reference
### Available Tools
- `scrape`
- Scrape a URL from the internet using ScraperAPI
- Parameters:
- `url` (string, required): URL to scrape
- `render` (boolean, optional): Whether to render the page using JavaScript. Defaults to `False`. Set to `True` only if the page requires JavaScript rendering to display its content.
- `country_code` (string, optional): Activate country geotargeting (ISO 2-letter code)
- `premium` (boolean, optional): Activate premium residential and mobile IPs
- `ultra_premium` (boolean, optional): Activate advanced bypass mechanisms. Cannot be combined with `premium`
- `device_type` (string, optional): Set request to use `mobile` or `desktop` user agents
- Returns: The scraped content as a string
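Each tool parameter maps onto a query parameter of the ScraperAPI HTTP API. A minimal sketch of how the `scrape` tool's arguments could be translated into a request URL (the helper name `build_scrape_url` is illustrative, not part of the package):

```python
from urllib.parse import urlencode

SCRAPERAPI_ENDPOINT = "https://api.scraperapi.com/"

def build_scrape_url(api_key, url, render=False, country_code=None,
                     premium=False, ultra_premium=False, device_type=None):
    """Translate scrape-tool arguments into a ScraperAPI request URL."""
    if premium and ultra_premium:
        raise ValueError("premium and ultra_premium cannot be combined")
    params = {"api_key": api_key, "url": url}
    if render:
        params["render"] = "true"
    if country_code:
        params["country_code"] = country_code
    if premium:
        params["premium"] = "true"
    if ultra_premium:
        params["ultra_premium"] = "true"
    if device_type:
        params["device_type"] = device_type
    return SCRAPERAPI_ENDPOINT + "?" + urlencode(params)
```

Optional flags are omitted from the query string when unset, so a plain request stays as small as possible.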
### Prompt templates
- Please scrape this URL `<URL>`. If you receive a 500 server error, identify the website's geo-targeting and add the corresponding country_code to overcome geo-restrictions. If errors continue, upgrade the request to use premium proxies by adding premium=true. For persistent failures, activate ultra_premium=true to use enhanced anti-blocking measures.
- Can you scrape URL `<URL>` to extract `<SPECIFIC_DATA>`? If the request returns missing or incomplete `<SPECIFIC_DATA>`, set render=true to enable JS rendering.
## Configuration
### Settings
- `API_KEY`: Your ScraperAPI API key.
### Configure for Claude Desktop App
1. Open the Claude Desktop application
2. Open the settings menu (typically a gear or three-dots icon in the upper right corner)
3. Select the "Developer" tab
4. Click "Edit Config" and paste [the JSON configuration](#installation)
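Before pasting, it can help to sanity-check that the snippet is valid JSON with the structure the client expects, for example:

```python
import json

# The same configuration snippet shown in the Installation section.
config = """
{
  "mcpServers": {
    "ScraperAPI": {
      "command": "python",
      "args": ["-m", "scraperapi_mcp_server"],
      "env": {"API_KEY": "<YOUR_SCRAPERAPI_API_KEY>"}
    }
  }
}
"""

parsed = json.loads(config)  # raises json.JSONDecodeError on malformed JSON
server = parsed["mcpServers"]["ScraperAPI"]
assert "command" in server and "args" in server
```

A trailing comma or missing brace is the most common reason the client silently fails to start the server.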
## Development
### Local setup
1. **Clone the repository:**
```bash
git clone https://github.com/scraperapi/scraperapi-mcp
cd scraperapi-mcp
```
2. **Install dependencies and run the package locally:**
- **Using Python:**
```bash
# Create virtual environment and activate it
python -m venv .venv
source .venv/bin/activate   # macOS/Linux
# OR
.venv\Scripts\activate      # Windows
# Install the local package in editable mode
pip install -e .
```
- **Using Docker:**
```bash
# Build the Docker image locally
docker build -t scraperapi-mcp-server .
```
### Run the server
- **Using Python:**
```bash
python -m scraperapi_mcp_server
```
- **Using Docker:**
```bash
# Run the Docker container with your API key
docker run -e API_KEY=<YOUR_SCRAPERAPI_API_KEY> scraperapi-mcp-server
```
### Debug
```bash
python3 -m scraperapi_mcp_server --debug
```
### Testing
This project uses [pytest](https://docs.pytest.org/en/stable/) for testing.
#### Install Test Dependencies
```bash
# Install the package with test dependencies
pip install -e ".[test]"
```
#### Running Tests
```bash
# Run All Tests
pytest
# Run Specific Test
pytest <TEST_FILE_PATH>
```
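As a sketch of what a unit test for the tool's parameter rules could look like (the helper `validate_params` and the test name are illustrative, not taken from the repository):

```python
# test_scrape_params.py -- illustrative only; the repository's actual
# tests and helper names may differ.

def validate_params(premium: bool = False, ultra_premium: bool = False) -> None:
    """Mirror the documented rule: premium and ultra_premium are exclusive."""
    if premium and ultra_premium:
        raise ValueError("premium and ultra_premium cannot be combined")

def test_exclusive_proxy_flags():
    validate_params(premium=True)           # valid on its own
    validate_params(ultra_premium=True)     # valid on its own
    try:
        validate_params(premium=True, ultra_premium=True)
    except ValueError:
        return
    raise AssertionError("combining the flags should raise ValueError")
```

pytest discovers any `test_*` function in `test_*.py` files automatically, so no registration is needed.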