# crawlerdetect

Name: crawlerdetect
Version: 0.3.2
Home page: https://github.com/moskrc/crawlerdetect
Summary: CrawlerDetect is a Python library designed to identify bots, crawlers, and spiders by analyzing their user agents.
Upload time: 2025-07-09 16:54:18
Author: Vitalii Shishorin
Requires Python: <4,>=3.9
License: MIT
Keywords: crawler, crawler detect, crawler detector, crawlerdetect, python crawler detect
[![test](https://github.com/moskrc/crawlerdetect/actions/workflows/python-package.yml/badge.svg)](https://github.com/moskrc/crawlerdetect/actions/workflows/python-package.yml)

# About CrawlerDetect

This is a Python wrapper for [CrawlerDetect](https://github.com/JayBizzle/Crawler-Detect), a web crawler detection library. It helps identify bots, crawlers, and spiders using the user agent and other HTTP headers. It can currently detect over 3,678 bots, spiders, and crawlers.

# How to install
```bash
$ pip install crawlerdetect
```

# How to use

## Method Reference
| camelCase | snake_case | Description |
|-----------|------------|-------------|
| `isCrawler()` | `is_crawler()` | Check whether the user agent is a crawler |
| `getMatches()` | `get_matches()` | Get the name of the detected crawler |
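
Both naming styles are available on the same object; the snake_case aliases behave identically to the camelCase originals inherited from the upstream PHP library. A minimal sketch using the snake_case forms:

```python
from crawlerdetect import CrawlerDetect

# Same behavior as isCrawler()/getMatches(), just PEP 8 style names
crawler_detect = CrawlerDetect()
if crawler_detect.is_crawler('Mozilla/5.0 (compatible; Sosospider/2.0; +http://help.soso.com/webspider.htm)'):
    print(crawler_detect.get_matches())  # 'Sosospider'
```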

## Variant 1: pass the user agent at call time
```python
from crawlerdetect import CrawlerDetect

crawler_detect = CrawlerDetect()
crawler_detect.isCrawler('Mozilla/5.0 (compatible; Sosospider/2.0; +http://help.soso.com/webspider.htm)')
# Returns True if a crawler user agent is detected
```

## Variant 2: pass the user agent to the constructor
```python
from crawlerdetect import CrawlerDetect

crawler_detect = CrawlerDetect(user_agent='Mozilla/5.0 (iPhone; CPU iPhone OS 7_1 like Mac OS X) AppleWebKit (KHTML, like Gecko) Mobile (compatible; Yahoo Ad monitoring; https://help.yahoo.com/kb/yahoo-ad-monitoring-SLN24857.html)')
crawler_detect.isCrawler()
# Returns True if a crawler user agent is detected
```

## Variant 3: pass the full request headers
```python
from crawlerdetect import CrawlerDetect

crawler_detect = CrawlerDetect(headers={
    'DOCUMENT_ROOT': '/home/test/public_html',
    'GATEWAY_INTERFACE': 'CGI/1.1',
    'HTTP_ACCEPT': '*/*',
    'HTTP_ACCEPT_ENCODING': 'gzip, deflate',
    'HTTP_CACHE_CONTROL': 'no-cache',
    'HTTP_CONNECTION': 'Keep-Alive',
    'HTTP_FROM': 'googlebot(at)googlebot.com',
    'HTTP_HOST': 'www.test.com',
    'HTTP_PRAGMA': 'no-cache',
    'HTTP_USER_AGENT': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.71 Safari/537.36',
    'PATH': '/bin:/usr/bin',
    'QUERY_STRING': 'order=closingDate',
    'REDIRECT_STATUS': '200',
    'REMOTE_ADDR': '127.0.0.1',
    'REMOTE_PORT': '3360',
    'REQUEST_METHOD': 'GET',
    'REQUEST_URI': '/?test=testing',
    'SCRIPT_FILENAME': '/home/test/public_html/index.php',
    'SCRIPT_NAME': '/index.php',
    'SERVER_ADDR': '127.0.0.1',
    'SERVER_ADMIN': 'webmaster@test.com',
    'SERVER_NAME': 'www.test.com',
    'SERVER_PORT': '80',
    'SERVER_PROTOCOL': 'HTTP/1.1',
    'SERVER_SIGNATURE': '',
    'SERVER_SOFTWARE': 'Apache',
    'UNIQUE_ID': 'Vx6MENRxerBUSDEQgFLAAAAAS',
    'PHP_SELF': '/index.php',
    'REQUEST_TIME_FLOAT': 1461619728.0705,
    'REQUEST_TIME': 1461619728,
})
crawler_detect.isCrawler()
# Returns True if a crawler user agent is detected
```

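The headers dict above follows the CGI/WSGI environ convention (`HTTP_USER_AGENT`, `HTTP_FROM`, and so on), so a framework's raw environ can often be handed over directly. A hedged sketch for Django, assuming `request.META` exposes the same CGI-style keys as in the example above:

```python
# Hypothetical Django middleware sketch; request.META is assumed to
# carry the CGI-style header keys shown in Variant 3 (untested).
from django.http import HttpResponseForbidden

from crawlerdetect import CrawlerDetect

def block_crawlers_middleware(get_response):
    def middleware(request):
        crawler_detect = CrawlerDetect(headers=request.META)
        if crawler_detect.isCrawler():
            # Reject requests whose user agent matches a known crawler
            return HttpResponseForbidden('Crawlers are not allowed here.')
        return get_response(request)
    return middleware
```
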
## Output the name of the bot that matched (if any)
```python
from crawlerdetect import CrawlerDetect

crawler_detect = CrawlerDetect()
crawler_detect.isCrawler('Mozilla/5.0 (compatible; Sosospider/2.0; +http://help.soso.com/webspider.htm)')
# Returns True if a crawler user agent is detected
crawler_detect.getMatches()
# 'Sosospider'
```
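
A single `CrawlerDetect` instance can be reused to classify many user agents in a loop, since Variant 1 accepts the user agent per call. A small sketch (the second user agent is an ordinary browser string chosen for illustration):

```python
from crawlerdetect import CrawlerDetect

crawler_detect = CrawlerDetect()
user_agents = [
    'Mozilla/5.0 (compatible; Sosospider/2.0; +http://help.soso.com/webspider.htm)',
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36',
]
for ua in user_agents:
    if crawler_detect.isCrawler(ua):
        # getMatches() reports which crawler pattern matched the last check
        print('crawler:', crawler_detect.getMatches())
    else:
        print('not a crawler')
```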

## Get the version of the library
```python
import crawlerdetect

crawlerdetect.__version__
# e.g. '0.3.2'
```

# Contributing

The patterns and test cases are synced from the PHP repo. If you find a bot/spider/crawler user agent that crawlerdetect fails to detect, please submit a pull request with the regex pattern and a test case to the [upstream PHP repo](https://github.com/JayBizzle/Crawler-Detect).

Failing that, just create an issue with the user agent you have found, and we'll take it from there :)

# Development

## Setup
```bash
$ poetry install
```

## Running tests
```bash
$ poetry run pytest
```

## Update crawlers from upstream PHP repo
```bash
$ ./update_data.sh
```

## Bump version
```bash
$ poetry run bump-my-version bump [patch|minor|major]
```

            
