# CeWLeR - Custom Word List generator Redefined
_CeWLeR_ crawls from a specified URL and collects words to create a custom wordlist.
It's a great tool for security testers and bug bounty hunters. The lists can be used for password cracking, subdomain enumeration, directory and file brute forcing, API endpoint discovery, etc. It's good to have an additional target-specific wordlist that is different from what everybody else uses.
_CeWLeR_ was originally inspired by the really nice tool [CeWL](https://github.com/digininja/CeWL). I had some challenges with _CeWL_ on a site I wanted a wordlist from, but without any Ruby experience I didn't know how to contribute or work around it. So instead I created a custom wordlist generator in Python to get the job done.
## At a glance
<img src="https://github.com/roys/cewler/blob/main/misc/demo.gif?raw=true" width="800" />
## Features
- Generates custom wordlists by scraping words from web sites
- A lot of options:
  - Output to screen or file
  - Can stay within the subdomain, or visit sibling and child subdomains, or visit anything within the same top domain
  - Can stay within a certain depth of a website
  - Speed can be controlled
  - Word length and casing can be configured
  - JavaScript and CSS can be included
  - Text can be extracted from PDF files (using [pypdf](https://pypi.org/project/pypdf/))
  - Crawled URLs can be output to a separate file
  - Scraped e-mail addresses can also be output to a separate file
  - ++
- Uses the excellent [Scrapy](https://scrapy.org) framework for scraping and the beautiful [rich](https://github.com/Textualize/rich) library for terminal output
## Commands and options
### Quick examples
#### Output to file
Will output to screen unless a file is specified.
`cewler --output wordlist.txt https://example.com`
#### Control speed and depth
The rate is specified in requests per second. Please play nicely and don't break any rules.
`cewler --output wordlist.txt --rate 5 --depth 2 https://example.com`
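To get a feel for what `--depth 2` means, here is a sketch that counts depth as the number of path segments in a URL. This is an assumption about how the option counts, purely for illustration; the actual rule lives in `src/cewler/spider.py`:

```python
from urllib.parse import urlparse

def path_depth(url):
    """Directory-path depth of a URL: number of path segments.
    (Hypothetical helper illustrating one way --depth could count;
    not CeWLeR's actual implementation.)"""
    path = urlparse(url).path.strip("/")
    return len(path.split("/")) if path else 0
```

With this counting, `https://example.com/a/b/page.html` sits at depth 3, so `--depth 2` would skip it.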
#### Change User-Agent header
The default User-Agent is a common browser.
`cewler --output wordlist.txt --user-agent "Cewler" https://example.com`
#### Control casing, word length and characters
Unless specified otherwise, words will have mixed case and be at least 5 characters long.
`cewler --output wordlist.txt --lowercase --min-word-length 2 --without-numbers https://example.com`
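The flags above can be thought of as a post-processing filter on the scraped words. A rough Python sketch of such filtering (the function name and logic are illustrative, not CeWLeR's internals):

```python
import re

def filter_words(words, min_length=5, lowercase=False, without_numbers=False):
    """Filter scraped words roughly the way the CLI flags suggest.
    (Hypothetical sketch; see src/cewler/spider.py for the real logic.)"""
    result = []
    for word in words:
        if lowercase:
            word = word.lower()  # --lowercase
        if len(word) < min_length:  # --min-word-length
            continue
        if without_numbers and re.search(r"\d", word):  # --without-numbers
            continue
        result.append(word)
    return result
```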
#### Visit all domains - including parent, children and siblings
The default is to just visit exactly the same (sub)domain as specified.
`cewler --output wordlist.txt -s all https://example.com`
#### Visit same (sub)domain + any belonging child subdomains
`cewler --output wordlist.txt -s children https://example.com`
#### Include JavaScript and/or CSS
If you want you can include links from `<script>` and `<link>` tags, plus words from within JavaScript and CSS.
`cewler --output wordlist.txt --include-js --include-css https://example.com`
#### Include PDF files
It's easy to extract text from PDF files as well.
`cewler --output wordlist.txt --include-pdf https://example.com`
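Conceptually, `--include-pdf` feeds PDF text through the same kind of word extraction as HTML pages. A minimal sketch using [pypdf](https://pypi.org/project/pypdf/) (the dependency CeWLeR uses for PDFs); both function names here are hypothetical helpers, not CeWLeR's actual code:

```python
import re

def words_from_text(text, min_length=5):
    """Split text into unique alphabetic words of a minimum length."""
    return sorted({w for w in re.findall(r"[A-Za-z]+", text) if len(w) >= min_length})

def words_from_pdf(path, min_length=5):
    """Extract words from a PDF file - roughly what --include-pdf does."""
    from pypdf import PdfReader  # pypdf is the library CeWLeR depends on
    reader = PdfReader(path)
    text = " ".join(page.extract_text() or "" for page in reader.pages)
    return words_from_text(text, min_length)
```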
#### Output visited URLs to file
It's also possible to store the crawled URLs to a file.
`cewler --output wordlist.txt --output-urls urls.txt https://example.com`
#### Output e-mails to file
It's also possible to store the scraped e-mail addresses to a separate file (they are always added to the wordlist).
`cewler --output wordlist.txt --output-emails emails.txt https://example.com`
#### Ninja trick πŸ₯·
If crawling a site takes too long, you can press `ctrl + c` once(!): the spider finishes its current requests, and whatever words have been found so far are stored to the output file.
### All options
```
cewler -h
usage: cewler [-h] [-css] [-d DEPTH] [-js] [-pdf] [-l] [-m MIN_WORD_LENGTH] [-o OUTPUT] [-oe OUTPUT_EMAILS] [-ou OUTPUT_URLS] [-r RATE] [-s {all,children,exact}] [--stream] [-u USER_AGENT] [-v] [-w] url

CeWLeR - Custom Word List generator Redefined

positional arguments:
  url                   URL to start crawling from

options:
  -h, --help            show this help message and exit
  -d DEPTH, --depth DEPTH
                        directory path depth to crawl, 0 for unlimited (default: 2)
  -css, --include-css   include CSS from external files and <style> tags
  -js, --include-js     include JavaScript from external files and <script> tags
  -pdf, --include-pdf   include text from PDF files
  -l, --lowercase       lowercase all parsed words
  -m MIN_WORD_LENGTH, --min-word-length MIN_WORD_LENGTH
  -o OUTPUT, --output OUTPUT
                        file where to stream and store wordlist instead of screen (default: screen)
  -oe OUTPUT_EMAILS, --output-emails OUTPUT_EMAILS
                        file where to stream and store e-mail addresses found (they will always be outputted in the wordlist)
  -ou OUTPUT_URLS, --output-urls OUTPUT_URLS
                        file where to stream and store URLs visited (default: not outputted)
  -r RATE, --rate RATE  requests per second (default: 20)
  -s {all,children,exact}, --subdomain_strategy {all,children,exact}
                        allow crawling [all] domains, including children and siblings, only [exact] the same (sub)domain (default), or same domain and any belonging [children]
  --stream              writes to file after each request (may produce duplicates because of threading) (default: false)
  -u USER_AGENT, --user-agent USER_AGENT
                        User-Agent header to send (default: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36)
  -v, --verbose         a bit more detailed output
  -w, --without-numbers
                        ignore words that are numbers or contain numbers
```
### Subdomain strategies
Example URL to scan `https://sub.example.com`:
| | `-s exact`* | `-s children` | `-s all` |
| --- | --- | --- | --- |
| `sub.example.com` | βœ… | βœ… | βœ… |
| `child.sub.example.com` | ❌ | βœ… | βœ… |
| `sibling.example.com` | ❌ | ❌ | βœ… |
| `example.com` | ❌ | ❌ | βœ… |

\* Default strategy
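The table above can be expressed as a small host-matching function. This is a naive sketch using suffix comparison and a last-two-labels heuristic for the top domain; the real spider uses the [tld](https://pypi.org/project/tld/) library, which also handles multi-label suffixes like `co.uk` correctly:

```python
def is_allowed(host, start_host, strategy):
    """Decide whether a host may be crawled under a subdomain strategy.
    (Illustrative sketch only - not CeWLeR's actual implementation.)"""
    if strategy == "exact":
        return host == start_host
    if strategy == "children":
        # the start host itself, or any subdomain below it
        return host == start_host or host.endswith("." + start_host)
    if strategy == "all":
        # naive heuristic: treat the last two labels as the top domain
        top = ".".join(start_host.split(".")[-2:])
        return host == top or host.endswith("." + top)
    raise ValueError(f"unknown strategy: {strategy}")
```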
### Digging into the code
If you want to do some tweaking yourself, you can probably find what you want in [src/cewler/constants.py](src/cewler/constants.py) and [src/cewler/spider.py](src/cewler/spider.py).
## Installation and upgrade
### Alternative 1 - installing from PyPI
Package homepage: https://pypi.org/project/cewler/
`python3 -m pip install cewler`
#### Upgrade
`python3 -m pip install cewler --upgrade`
### Alternative 2 - installing from GitHub
#### 1. Clone repository
`git clone https://github.com/roys/cewler.git --depth 1`
#### 2. Install dependencies
```
cd cewler
python3 -m pip install -r requirements.txt
```
#### 3. Shortcut on Un*x based system (optional)
```
cd src/cewler
chmod +x cewler.py
ln -s $(pwd)/cewler.py /usr/local/bin/cewler
cewler -h
```
#### Upgrade
`git pull`
## Pronunciation
_CeWLeR_ is pronounced _"cooler"_.
## Contributors
A huge thank you to everyone who has contributed to making CeWLeR better! Your contributions, big and small, make a significant difference.
Contributions of any kind are welcome and recognized. From bug reports to coding, documentation to design, every effort is appreciated:
- [Chris Dale](https://github.com/ChrisAD) - for testing, bug reporting and fixing
- [Mathies Svarrer-LanthΓ©n](https://github.com/seihtam) - for adding support for PDF extraction
## License
[Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)](LICENSE)