<img src="https://github.com/user-attachments/assets/25912aba-690a-45e2-a6a9-2b0445e8218f" width="600"/>
**WebCap** is an extremely lightweight web screenshot tool. It doesn't require Selenium, Playwright, Puppeteer, or any other browser automation framework; all it needs is a working Chrome installation. Used by [BBOT](https://github.com/blacklanternsecurity/bbot).
### Installation
```bash
pipx install webcap
```
### Web Interface (`webcap server`)
https://github.com/user-attachments/assets/a5dea3fb-fa01-41e7-90cd-67c6efa3d6e5
### Features
WebCap's standout feature is its ability to capture not only the **fully-rendered DOM**, but also every snippet of **parsed Javascript** (whether inline or external), and the **full content** of every HTTP request and response (including Javascript API calls, etc.). For convenience, it can output everything directly to JSON.
### Example Commands
#### Scanning
```bash
# Capture screenshots of all URLs in urls.txt
webcap scan urls.txt -o ./my_screenshots
# Output to JSON, and include the fully-rendered DOM
webcap scan urls.txt --json --dom | jq
# Capture requests and responses
webcap scan urls.txt --json --requests --responses | jq
# Capture javascript
webcap scan urls.txt --json --javascript | jq
# Extract text from screenshots
webcap scan urls.txt --json --ocr | jq
```
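With `--json`, webcap emits one JSON object per line (JSON Lines), which is easy to post-process in Python as well as with `jq`. Below is a minimal sketch of summarizing that output; the field names (`url`, `status`, `title`) are assumptions for illustration — inspect your actual output (e.g. with `jq keys`) to confirm them.

```python
import json

def summarize(jsonl_lines):
    """Summarize webcap --json output (one JSON object per line).

    NOTE: the field names ("url", "status", "title") are assumptions
    for illustration; check your real webcap output before relying on them.
    """
    rows = []
    for line in jsonl_lines:
        record = json.loads(line)
        rows.append((record.get("url"), record.get("status"), record.get("title")))
    return rows

# Example with a made-up record shaped like webcap's JSON output:
sample = ['{"url": "http://example.com", "status": 200, "title": "Example Domain"}']
print(summarize(sample))  # [('http://example.com', 200, 'Example Domain')]
```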
#### Server
```bash
# Start the server
webcap server
# Browse to http://localhost:8000
```
# Screenshots
### CLI Interface (`webcap scan`)

#### Fully-rendered DOM

#### Javascript Capture

#### Requests + Responses

#### OCR

### Full feature list
- [x] Blazing fast screenshots
- [x] Fullscreen capture (entire scrollable page)
- [x] JSON output
- [x] Full DOM extraction
- [x] Javascript extraction (inline + external)
- [ ] Javascript extraction (environment dump)
- [x] Full network logs (incl. request/response bodies)
- [x] Title
- [x] Status code
- [x] Fuzzy (perception) hashing
- [ ] Technology detection
- [x] OCR text extraction
- [x] Web interface
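The fuzzy (perception) hashes above are useful for grouping visually near-identical pages. Comparing them is typically done with a bitwise Hamming distance; here is a minimal sketch, assuming the hashes arrive as equal-length hex strings (verify the actual encoding in your webcap output):

```python
def hamming_distance(hash_a: str, hash_b: str) -> int:
    """Bitwise Hamming distance between two equal-length hex-encoded hashes.

    Assumes perception hashes are hex strings; confirm the encoding
    in your own webcap JSON output before relying on this.
    """
    if len(hash_a) != len(hash_b):
        raise ValueError("hashes must be the same length")
    # XOR the integers and count differing bits
    return bin(int(hash_a, 16) ^ int(hash_b, 16)).count("1")

# Identical hashes -> distance 0; small distances suggest near-duplicate pages.
print(hamming_distance("ffcc00", "ffcc00"))  # 0
print(hamming_distance("ffcc00", "ffcc01"))  # 1
```

Pages whose hashes fall within a small distance threshold (the exact cutoff depends on hash length) are good candidates for deduplication.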
### Webcap as a Python library
```python
import base64
from webcap import Browser

async def main():
    # create a browser instance
    browser = Browser()
    # start the browser
    await browser.start()
    # take a screenshot
    webscreenshot = await browser.screenshot("http://example.com")
    # save the screenshot to a file
    with open("screenshot.png", "wb") as f:
        f.write(webscreenshot.blob)
    # stop the browser
    await browser.stop()

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
```
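When scanning with `--json --base64`, each record carries the screenshot as a base64 string. The sketch below decodes one back to image bytes; the `"base64"` field name is an assumption for illustration — check the key in your actual output.

```python
import base64
import json

def save_screenshot(record_json: str, path: str) -> int:
    """Decode a base64 screenshot from a webcap JSON record and write it out.

    The "base64" field name is an assumption; inspect your real
    --json --base64 output to confirm the key.
    """
    record = json.loads(record_json)
    blob = base64.b64decode(record["base64"])
    with open(path, "wb") as f:
        f.write(blob)
    return len(blob)

# Round-trip demo with fake image bytes standing in for a real PNG:
fake = base64.b64encode(b"\x89PNG fake bytes").decode()
n = save_screenshot(json.dumps({"url": "http://example.com", "base64": fake}), "demo.png")
print(n)  # 15
```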
## CLI Usage (--help)
```
Usage: webcap scan [OPTIONS] URLS
Screenshot URLs
╭─ Arguments ────────────────────────────────────────────────────────────────────────────────╮
│ * urls TEXT URL(s) to capture, or file(s) containing URLs [default: None] │
│ [required] │
╰────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Options ──────────────────────────────────────────────────────────────────────────────────╮
│ --json -j Output JSON │
│ --chrome -c TEXT Path to Chrome executable [default: None] │
│ --output -o OUTPUT_DIR Output directory │
│ [default: /home/bls/Downloads/code/webcap/screenshots] │
│ --help Show this message and exit. │
╰────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Screenshots ──────────────────────────────────────────────────────────────────────────────╮
│ --resolution -r RESOLUTION Resolution to capture [default: 1440x900] │
│ --full-page -f Capture the full page (larger resolution images) │
│ --no-screenshots Only visit the sites; don't capture screenshots │
│ (useful with -j/--json) │
╰────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Performance ──────────────────────────────────────────────────────────────────────────────╮
│ --threads -t INTEGER Number of threads to use [default: 15] │
│ --timeout -T INTEGER Timeout before giving up on a web request [default: 10] │
│ --delay SECONDS Delay before capturing [default: 3.0] │
╰────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ HTTP ─────────────────────────────────────────────────────────────────────────────────────╮
│ --user-agent -U TEXT User agent to use │
│ [default: Mozilla/5.0 (Windows NT 10.0; Win64; x64) │
│ AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 │
│ Safari/537.36] │
│ --headers -H TEXT Additional headers to send in format: 'Header-Name: │
│ Header-Value' (multiple supported) │
│ --proxy -p TEXT HTTP proxy to use [default: None] │
╰────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ JSON (Only apply when -j/--json is used) ─────────────────────────────────────────────────╮
│ --base64 -b Output each screenshot as base64 │
│ --dom -d Capture the fully-rendered DOM │
│ --responses -rs Capture the full body of each HTTP response │
│ (including API calls etc.) │
│ --requests -rq Capture the full body of each HTTP request │
│ (including API calls etc.) │
│ --javascript -J Capture every snippet of Javascript (inline + │
│ external) │
│ --ignore-types TEXT Ignore these filetypes │
│ [default: Image, Media, Font, Stylesheet] │
│ --ocr --no-ocr Extract text from screenshots [default: no-ocr] │
╰────────────────────────────────────────────────────────────────────────────────────────────╯
```