PyIntruder

Name: PyIntruder
Version: 0.1.4
Home page: https://github.com/sirpsycho/PyIntruder
Summary: Command line URL fuzzer
Upload time: 2017-01-11 22:03:55
Author: sirpsycho
License: MIT
Keywords: pyintruder, http, fuzzer, url, scan
# PyIntruder
Simple Command Line URL Fuzzer


```
./PyIntruder.py -h
Usage: ./PyIntruder.py [options] <base url> <payload list>
(Use '$' as variable in url that will be swapped out with each payload)

Example:  PyIntruder.py http://www.example.com/file/$.pdf payloads.txt

Options:
  -h, --help         show this help message and exit
  -r, --redir        Allow HTTP redirects
  -s, --save         Save HTTP response content to files
  -o OUT, --out=OUT  Directory to save HTTP responses
```


# Description
This script lets you quickly test many similar URLs and analyze the responses. It can act as a simplified alternative to Burp Suite's "Intruder" tool (which heavily rate-limits requests in the free version).

# Use Case

As an example, say you observe the following URL:
```
http://www.example.com/file/74
```
When you access the URL, your browser redirects you to a page that automatically downloads a file (this could be any type of file: pdf, doc, exe, mp3, etc.). This is a common way of letting a website's users download content. In this particular example, the URL above raises an obvious question: what might be waiting at 'http://www.example.com/file/75', or at 'http://www.example.com/file/73'?

This program automates browsing to each of these potentially interesting URLs by cycling through a list of custom "payloads". Create a list of payloads (say, the numbers 1 through 100) and try each payload in a particular position within the URL, using the dollar-sign character to tell the program where to swap in each payload.
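If you don't already have such a list, standard Unix tools can generate one. This is just a convenience sketch, not part of PyIntruder itself:
```
# write the numbers 1-100 to payloads.txt, one per line
seq 1 100 > payloads.txt
# or zero-padded (01, 02, ...) if the target uses fixed-width IDs
printf '%02d\n' $(seq 1 99) > payloads.txt
```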

```
./PyIntruder.py http://www.example.com/file/$ payloads.txt
```
In the above command, where "payloads.txt" is a text file containing the numbers 1 through 100 (one number per line), a user can quickly determine which URLs lead somewhere interesting by comparing HTTP status code, Content-Length, or response time:

sample output:
```
root@kali:~# ./PyIntruder.py http://www.example.com/file/$ payloads.txt
Status    Length    Time      Host
----------------------------------------
200       0         110.536   http://www.example.com/file/01
200       0         112.312   http://www.example.com/file/02
302       0         104.266   http://www.example.com/file/03

...

200       0         137.111   http://www.example.com/file/73
302       0         120.607   http://www.example.com/file/74
302       0         108.553   http://www.example.com/file/75

...
```
In this case, it looks like the interesting URLs are the ones that return a 302 HTTP status code (redirect). If all URLs redirect and you can't find any other distinguishing factor, try the "-r" option to enable following redirects; the redirected results will often have more interesting, varied content lengths. By default the program does not follow redirects, since that is usually much faster and a little less noisy/intrusive, which is good for an initial scan.
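If you only want to see the redirecting rows, standard shell filters work on the table. The sketch below assumes the four-column output shown above is written to stdout:
```
# keep only rows whose Status column is 302
./PyIntruder.py http://www.example.com/file/$ payloads.txt | awk '$1 == 302'
```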


In order to download whatever files might be available at each of these links, you can run a command like this:
```
./PyIntruder.py -rs -o /path/to/save/files http://www.example.com/file/$ payloads-refined.txt
```

- The "-r" option tells the program to follow redirects
- The "-s" option tells the program to save HTTP response content to files
- The "-o" option tells the program where to save the responses on your local machine (optional; if "-s" is used without "-o", files are saved to the current directory)
- "payloads-refined.txt" is your refined list of payloads, useful for weeding out the URLs you already know don't go anywhere interesting (one way to build it is sketched below)
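
One way to produce "payloads-refined.txt" is to reuse the output of the initial scan. This pipeline is a sketch under the same assumption as above (the four-column table on stdout); it keeps the 302 rows and strips each URL down to its last path segment, which is the payload:
```
# extract the payload (last URL path segment) from every 302 row
./PyIntruder.py http://www.example.com/file/$ payloads.txt \
  | awk '$1 == 302 {n = split($4, p, "/"); print p[n]}' > payloads-refined.txt
```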


# Dependencies
If it's not already installed, make sure to [install Requests](http://docs.python-requests.org/en/master/user/install/) (try running `pip install requests`).
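To confirm that Requests is importable from the Python environment you'll run PyIntruder with:
```
# prints the installed Requests version, or fails with an ImportError
python -c "import requests; print(requests.__version__)"
```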
            
