# Crawlbase API middleware for Scrapy
Processes [Scrapy](http://scrapy.org/) requests through [Crawlbase](https://crawlbase.com) services, with either Normal or JavaScript tokens.
## Installing
Choose a way of installing:
- Clone the repository inside your Scrapy project and run the following:
```bash
python setup.py install
```
- Or install from [PyPI](https://pypi.org/project/scrapy-crawlbase-middleware/) with pip: `pip install scrapy-crawlbase-middleware`
Then add the following lines to your Scrapy `settings.py`:
```python
# Activate the middleware
CRAWLBASE_ENABLED = True
# The Crawlbase API token you wish to use, either the normal or the JavaScript token
CRAWLBASE_TOKEN = 'your token'
# Register the middleware in the downloader middlewares
DOWNLOADER_MIDDLEWARES = {
'scrapy_crawlbase.CrawlbaseMiddleware': 610
}
```
## Usage
Use `scrapy_crawlbase.CrawlbaseRequest` instead of the Scrapy built-in `Request`.
`CrawlbaseRequest` accepts additional keyword arguments that are passed on to the Crawlbase API:
```python
from scrapy import Spider

from scrapy_crawlbase import CrawlbaseRequest

class ExampleScraper(Spider):

    def start_requests(self):
        yield CrawlbaseRequest(
            "http://target-url",
            callback=self.parse_result,
            device='desktop',
            country='US',
            page_wait=1000,
            ajax_wait=True,
            dont_filter=True
        )
```
The middleware automatically replaces the target URL with the Crawlbase API URL and encodes the extra parameters into its query string.
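To illustrate that rewriting step, here is a minimal sketch of how a target URL and its options could be encoded into a single API URL. This is an illustration only, not the middleware's actual code; the `https://api.crawlbase.com/` endpoint and the `build_crawlbase_url` helper name are assumptions, and the parameter names simply mirror the request arguments above:

```python
from urllib.parse import urlencode

# Assumed endpoint for illustration; check the Crawlbase docs for the real one.
API_ENDPOINT = "https://api.crawlbase.com/"

def build_crawlbase_url(token, target_url, **params):
    """Encode a target URL plus extra options into one API URL.

    Hypothetical helper: extra options such as device, country and
    page_wait become additional query-string parameters.
    """
    query = {"token": token, "url": target_url}
    query.update(params)
    return API_ENDPOINT + "?" + urlencode(query)

api_url = build_crawlbase_url(
    "your token",
    "http://target-url",
    device="desktop",
    country="US",
)
print(api_url)
```

The original target URL ends up percent-encoded inside the `url` query parameter, so the downloader only ever sees a single Crawlbase API request.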
If you have questions or need help using the library, please open an issue or [contact us](https://crawlbase.com/contact).
---
Copyright 2023 Crawlbase