scrapy-proxycrawl-middleware

Name: scrapy-proxycrawl-middleware
Version: 1.2.0
Home page: https://github.com/proxycrawl/scrapy-proxycrawl-middleware
Summary: Scrapy ProxyCrawl Proxy Middleware: ProxyCrawl interfacing middleware for Scrapy
Upload time: 2023-07-05 05:21:17
Author: ProxyCrawl
License: Apache-2.0
Keywords: scrapy, middleware, scraping, scraper, crawler, crawling, proxycrawl, api
Requirements: none recorded
# DEPRECATION NOTICE

> :warning: **IMPORTANT:** This package is no longer maintained or supported. For the latest updates, please use our new package at [scrapy-crawlbase-middleware](https://github.com/crawlbase-source/scrapy-crawlbase-middleware).

---

# ProxyCrawl API middleware for Scrapy

Processes [Scrapy](http://scrapy.org/) requests through [ProxyCrawl](https://proxycrawl.com) services, using either a Normal or a JavaScript token.

## Installing

Choose a way of installing:

- Clone the repository inside your Scrapy project and run the following:

```bash
python setup.py install
```

- Or install from [PyPI](https://pypi.org/project/scrapy-proxycrawl-middleware/) with pip:

```bash
pip install scrapy-proxycrawl-middleware
```

Then in your Scrapy `settings.py` add the following lines:

```python
# Activate the middleware
PROXYCRAWL_ENABLED = True

# The ProxyCrawl API token you wish to use: either the normal or the JavaScript token
PROXYCRAWL_TOKEN = 'your token'

# Register the middleware with the downloader
DOWNLOADER_MIDDLEWARES = {
    'scrapy_proxycrawl.ProxyCrawlMiddleware': 610
}
```
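Scrapy downloader middlewares conventionally pick up their settings when they are instantiated (typically via a `from_crawler` classmethod reading `crawler.settings`). The following is only an illustrative sketch of that pattern, not the package's actual implementation; a plain dict stands in for Scrapy's settings object so the snippet runs on its own:

```python
class ProxyCrawlMiddlewareSketch:
    """Illustrative only: how a middleware like this typically
    reads PROXYCRAWL_ENABLED / PROXYCRAWL_TOKEN at startup."""

    def __init__(self, enabled, token):
        self.enabled = enabled
        self.token = token

    @classmethod
    def from_settings(cls, settings):
        # In real Scrapy this would be from_crawler(cls, crawler),
        # reading crawler.settings; a plain dict stands in here.
        return cls(settings.get('PROXYCRAWL_ENABLED', False),
                   settings.get('PROXYCRAWL_TOKEN'))

mw = ProxyCrawlMiddlewareSketch.from_settings(
    {'PROXYCRAWL_ENABLED': True, 'PROXYCRAWL_TOKEN': 'your token'})
```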

## Usage

Use `scrapy_proxycrawl.ProxyCrawlRequest` instead of Scrapy's built-in `Request`.
`ProxyCrawlRequest` accepts additional arguments that are passed on to the ProxyCrawl API:

```python
from scrapy import Spider
from scrapy_proxycrawl import ProxyCrawlRequest

class ExampleScraper(Spider):
    name = 'example'

    def start_requests(self):
        yield ProxyCrawlRequest(
            "http://target-url",
            callback=self.parse_result,
            device='desktop',
            country='US',
            page_wait=1000,
            ajax_wait=True,
            dont_filter=True
        )
```

The middleware automatically replaces the target URL with the ProxyCrawl API URL and encodes the request parameters into it.
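Conceptually, that rewrite amounts to wrapping the original URL and parameters in a single API request URL. The sketch below illustrates the idea with `urllib.parse.urlencode`; the endpoint and exact parameter handling are assumptions for illustration, not the middleware's actual code:

```python
from urllib.parse import urlencode

def build_proxycrawl_url(target_url, token, **params):
    # Sketch: fold the token, target URL, and any extra API
    # parameters into one query string (hypothetical endpoint).
    query = {'token': token, 'url': target_url}
    query.update({k: v for k, v in params.items() if v is not None})
    return 'https://api.proxycrawl.com/?' + urlencode(query)

api_url = build_proxycrawl_url(
    'http://target-url', 'your token',
    device='desktop', country='US', page_wait=1000,
)
```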

If you have questions or need help using the library, please open an issue or [contact us](https://proxycrawl.com/contact).

---

Copyright 2023 ProxyCrawl
            
