## About CrawlerDetect
This is a Python wrapper for [CrawlerDetect](https://github.com/JayBizzle/Crawler-Detect), the web crawler detection library.
It detects bots/crawlers/spiders via the user agent and other HTTP headers, and can currently recognize over 1,000 bots/spiders/crawlers.
### Installation
Run `pip install crawlerdetect`
### Usage
#### Variant 1
```Python
from crawlerdetect import CrawlerDetect
crawler_detect = CrawlerDetect()
crawler_detect.isCrawler('Mozilla/5.0 (compatible; Sosospider/2.0; +http://help.soso.com/webspider.htm)')
# returns True if a crawler user agent is detected
```
#### Variant 2
```Python
from crawlerdetect import CrawlerDetect
crawler_detect = CrawlerDetect(user_agent='Mozilla/5.0 (iPhone; CPU iPhone OS 7_1 like Mac OS X) AppleWebKit (KHTML, like Gecko) Mobile (compatible; Yahoo Ad monitoring; https://help.yahoo.com/kb/yahoo-ad-monitoring-SLN24857.html)')
crawler_detect.isCrawler()
# returns True if a crawler user agent is detected
```
#### Variant 3
```Python
from crawlerdetect import CrawlerDetect
crawler_detect = CrawlerDetect(headers={'DOCUMENT_ROOT': '/home/test/public_html', 'GATEWAY_INTERFACE': 'CGI/1.1', 'HTTP_ACCEPT': '*/*', 'HTTP_ACCEPT_ENCODING': 'gzip, deflate', 'HTTP_CACHE_CONTROL': 'no-cache', 'HTTP_CONNECTION': 'Keep-Alive', 'HTTP_FROM': 'googlebot(at)googlebot.com', 'HTTP_HOST': 'www.test.com', 'HTTP_PRAGMA': 'no-cache', 'HTTP_USER_AGENT': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.71 Safari/537.36', 'PATH': '/bin:/usr/bin', 'QUERY_STRING': 'order=closingDate', 'REDIRECT_STATUS': '200', 'REMOTE_ADDR': '127.0.0.1', 'REMOTE_PORT': '3360', 'REQUEST_METHOD': 'GET', 'REQUEST_URI': '/?test=testing', 'SCRIPT_FILENAME': '/home/test/public_html/index.php', 'SCRIPT_NAME': '/index.php', 'SERVER_ADDR': '127.0.0.1', 'SERVER_ADMIN': 'webmaster@test.com', 'SERVER_NAME': 'www.test.com', 'SERVER_PORT': '80', 'SERVER_PROTOCOL': 'HTTP/1.1', 'SERVER_SIGNATURE': '', 'SERVER_SOFTWARE': 'Apache', 'UNIQUE_ID': 'Vx6MENRxerBUSDEQgFLAAAAAS', 'PHP_SELF': '/index.php', 'REQUEST_TIME_FLOAT': 1461619728.0705, 'REQUEST_TIME': 1461619728})
crawler_detect.isCrawler()
# returns True if a crawler user agent is detected
```
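The CGI-style keys in the example above are the same format a WSGI server already provides in its `environ` dict, so in a WSGI application you can pass the request environment through with only light filtering. A minimal sketch (the helper name and sample data are illustrative, not part of the library's API):

```python
def wsgi_headers(environ):
    """Keep only the CGI-style header keys; drop WSGI-internal entries."""
    keep = ("REMOTE_ADDR", "REQUEST_METHOD", "SERVER_NAME")
    return {k: v for k, v in environ.items()
            if k.startswith("HTTP_") or k in keep}

# Illustrative sample of a WSGI environ dict
sample_environ = {
    "HTTP_USER_AGENT": "Googlebot/2.1 (+http://www.google.com/bot.html)",
    "HTTP_ACCEPT": "*/*",
    "REQUEST_METHOD": "GET",
    "wsgi.input": None,  # WSGI-internal keys like this are filtered out
}
headers = wsgi_headers(sample_environ)
```

The resulting `headers` dict can then be passed as the `headers=` argument to `CrawlerDetect`.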
#### Output the name of the bot that matched (if any)
```Python
from crawlerdetect import CrawlerDetect
crawler_detect = CrawlerDetect()
crawler_detect.isCrawler('Mozilla/5.0 (compatible; Sosospider/2.0; +http://help.soso.com/webspider.htm)')
# returns True if a crawler user agent is detected
crawler_detect.getMatches()
# 'Sosospider'
```
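Conceptually, `getMatches()` can return a name because detection runs the user agent through a list of regular expressions and remembers whatever matched. A simplified, self-contained sketch of that idea (this is not the library's actual code, and the pattern list is an illustrative subset):

```python
import re

# Illustrative subset of crawler name patterns
CRAWLER_PATTERNS = ["Sosospider", "Googlebot", "bingbot"]

def detect(user_agent):
    """Return the first matching crawler pattern, or None for a normal browser."""
    for pattern in CRAWLER_PATTERNS:
        match = re.search(pattern, user_agent, re.IGNORECASE)
        if match:
            return match.group()
    return None

print(detect("Mozilla/5.0 (compatible; Sosospider/2.0; +http://help.soso.com/webspider.htm)"))
# Sosospider
print(detect("Mozilla/5.0 (Windows NT 10.0) Chrome/120.0"))
# None
```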
#### Get version of the library
```Python
from crawlerdetect import CrawlerDetect
crawler_detect = CrawlerDetect()
crawler_detect.version
```
### Contributing
If you find a bot/spider/crawler user agent that CrawlerDetect fails to detect, please submit a pull request with the regex pattern added to the array in `providers/crawlers.py` and add the failing user agent to `tests/crawlers.txt`.
Failing that, just create an issue with the user agent you have found, and we'll take it from there :)
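Before opening a pull request, a candidate regex can be sanity-checked locally with Python's `re` module against both the failing user agent and a normal browser user agent. A standalone sketch (the pattern and user agents below are made up for illustration):

```python
import re

# Hypothetical pattern for a newly discovered crawler
candidate_pattern = r"ExampleBot/\d+\.\d+"
failing_ua = "Mozilla/5.0 (compatible; ExampleBot/1.3; +http://example.com/bot)"
browser_ua = "Mozilla/5.0 (Windows NT 10.0) Chrome/120.0"

# The pattern should match the crawler but not a regular browser
assert re.search(candidate_pattern, failing_ua), "should match the new crawler UA"
assert not re.search(candidate_pattern, browser_ua), "should not match a browser UA"
print("pattern looks good")
```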
### ES6 Library
To use this library with Node.js or any ES6-based application, check out [es6-crawler-detect](https://github.com/JefferyHus/es6-crawler-detect).
### .NET Library
To use this library in a .NET Standard (including .NET Core) based project, check out [NetCrawlerDetect](https://github.com/gplumb/NetCrawlerDetect).
### Nette Extension
To use this library with the Nette framework, check out [NetteCrawlerDetect](https://github.com/JanGalek/Crawler-Detect).
### Ruby Gem
To use this library with Ruby on Rails or any Ruby-based application, check out the [crawler_detect](https://github.com/loadkpi/crawler_detect) gem.
_Parts of this class are based on the brilliant [MobileDetect](https://github.com/serbanghita/Mobile-Detect)_
[![Analytics](https://ga-beacon.appspot.com/UA-72430465-1/Crawler-Detect/readme?pixel)](https://github.com/JayBizzle/Crawler-Detect)