zapgpt 3.1.1 (PyPI)

- Summary: A command-line tool for interacting with various LLM providers
- Requires: Python >=3.9
- License: MIT
- Keywords: ai, llm, gpt, openai, anthropic, mistral, cli, chat, terminal
- Dependencies: openai, Requests, tabulate, tiktoken, rich, pygments, httpx, rich-argparse, importlib_resources
- Uploaded: 2025-07-13 11:25:02
# zapgpt

A minimalist CLI tool to chat with LLMs from your terminal. Supports multiple providers including OpenAI, OpenRouter, Together, Replicate, DeepInfra, and GitHub AI.

███████╗ █████╗ ██████╗  ██████╗ ██████╗ ████████╗
╚══███╔╝██╔══██╗██╔══██╗██╔════╝ ██╔══██╗╚══██╔══╝
  ███╔╝ ███████║██████╔╝██║  ███╗██████╔╝   ██║
 ███╔╝  ██╔══██║██╔═══╝ ██║   ██║██╔═══╝    ██║
███████╗██║  ██║██║     ╚██████╔╝██║        ██║
╚══════╝╚═╝  ╚═╝╚═╝      ╚═════╝ ╚═╝        ╚═╝
         GPT on the CLI. Like a boss.

`zapgpt` is a minimalist CLI tool for chatting with LLMs from your terminal: no bloated UI, just fast, raw GPT magic straight from the shell. It ships with pre-cooked system prompts for ethical hacking and coding, supports file attachments, includes a sensible default prompt, and tracks your usage. No extra features or frills: it's a simple single-file script you can modify as needed.

Updated to v2.

[![Introduction](https://i.ytimg.com/vi/hpiVtj_gSD4/hqdefault.jpg)](https://www.youtube.com/watch?v=hpiVtj_gSD4)

## 💾 Requirements

* Python 3.9+
* `uv` (recommended - blazingly fast Python package manager)
* pip (alternative to uv)

## 🚀 Installation

### Option 1: Install with `uv` (⚡ Recommended)

```bash
uv tool install zapgpt
```

> **Why uv?** `uv` is blazingly fast and handles CLI tools perfectly. It installs zapgpt globally and manages dependencies automatically.

**Don't have uv?** Install it first:
```bash
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh

# Windows
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"

# Or with pip
pip install uv
```

### Option 2: Install from PyPI with pip

```bash
pip install zapgpt
```

### Option 3: Development Installation

**With uv (recommended):**
```bash
git clone https://github.com/raj77in/zapgpt.git
cd zapgpt
uv sync
uv run zapgpt "test"

# Optional: Set up pre-commit hooks for code quality
./setup-pre-commit.sh
```

**With pip:**
```bash
git clone https://github.com/raj77in/zapgpt.git
cd zapgpt
pip install -e .
```

### Option 4: From Source (Classic method)

```bash
git clone https://github.com/raj77in/zapgpt.git
cd zapgpt
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```

## 🔑 Environment Variables

ZapGPT only requires the API key for the provider you're using. Set the appropriate environment variable:

| Provider | Environment Variable | Get API Key |
|----------|---------------------|-------------|
| OpenAI | `OPENAI_API_KEY` | [platform.openai.com](https://platform.openai.com/account/api-keys) |
| OpenRouter | `OPENROUTER_KEY` | [openrouter.ai](https://openrouter.ai/keys) |
| Together | `TOGETHER_API_KEY` | [api.together.xyz](https://api.together.xyz/settings/api-keys) |
| Replicate | `REPLICATE_API_TOKEN` | [replicate.com](https://replicate.com/account/api-tokens) |
| DeepInfra | `DEEPINFRA_API_TOKEN` | [deepinfra.com](https://deepinfra.com/dash/api_keys) |
| GitHub | `GITHUB_KEY` | [github.com](https://github.com/settings/tokens) |

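When scripting around zapgpt, it can help to fail fast if the right key is missing. The helper below just captures the table above as a lookup; it is an illustrative sketch, not part of zapgpt's API (zapgpt reads these variables itself):

```python
import os
from typing import Optional

# Provider -> environment variable, as listed in the table above.
API_KEY_VARS = {
    "openai": "OPENAI_API_KEY",
    "openrouter": "OPENROUTER_KEY",
    "together": "TOGETHER_API_KEY",
    "replicate": "REPLICATE_API_TOKEN",
    "deepinfra": "DEEPINFRA_API_TOKEN",
    "github": "GITHUB_KEY",
}

def missing_key(provider: str) -> Optional[str]:
    """Return the unset variable name for `provider`, or None if it is set."""
    var = API_KEY_VARS[provider.lower()]
    return None if os.environ.get(var) else var
```

A wrapper script can then print a clear error (e.g. `missing_key("openrouter")` returns `"OPENROUTER_KEY"` when that variable is unset) before ever invoking zapgpt.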
**Example:**
```bash
# For OpenAI (default provider)
export OPENAI_API_KEY="your-openai-api-key-here"

# For OpenRouter
export OPENROUTER_KEY="your-openrouter-key-here"
```

## 🧠 Usage

After installation, you can use `zapgpt` directly from the command line:

```bash
# Basic usage (uses OpenAI by default)
zapgpt "What's the meaning of life?"

# Use different providers
zapgpt --provider openrouter "Explain quantum computing"
zapgpt --provider together "Write a Python function"
zapgpt --provider github "Debug this code"

# Use specific models
zapgpt -m gpt-4o "Complex reasoning task"
zapgpt --provider openrouter -m anthropic/claude-3.5-sonnet "Creative writing"
```

### Interactive Mode

```bash
zapgpt  # Starts interactive mode
```

### Development Usage

**With uv:**
```bash
uv run zapgpt "Your question here"
```

**With Python:**
```bash
python -m zapgpt "Your question here"
# or
python zapgpt/main.py "Your question here"
```

### Quiet Mode (for Scripting)

```bash
# Suppress all output except the LLM response
zapgpt --quiet "What is the capital of France?"

# Perfect for shell scripts
RESPONSE=$(zapgpt -q "Summarize this in one word: Machine Learning")
echo "Result: $RESPONSE"
```

### File Input (for Automation)

```bash
# Send file contents to LLM
zapgpt --file /path/to/file.txt "Analyze this log file"

# Analyze command output
nmap -sV target.com > scan_results.txt
zapgpt -f scan_results.txt --use-prompt vuln_assessment "Analyze these scan results"

# Process multiple files
for file in *.log; do
    zapgpt -q -f "$file" "Summarize security events" >> summary.txt
done
```

### Automation Examples

```bash
#!/bin/bash
# Penetration Testing Agent
TARGET="example.com"

# 1. Reconnaissance
nmap -sV $TARGET > nmap_results.txt
RESPONSE=$(zapgpt -q -f nmap_results.txt --use-prompt vuln_assessment "Identify potential vulnerabilities")
echo "Vulnerabilities found: $RESPONSE"

# 2. Web Analysis
nikto -h $TARGET > nikto_results.txt
zapgpt -f nikto_results.txt "Prioritize these web vulnerabilities" > web_analysis.txt

# 3. Generate Report
zapgpt -q "Create executive summary" -f web_analysis.txt > final_report.md
```

```bash
#!/bin/bash
# Log Analysis Agent
# Monitor and analyze system logs
tail -n 100 /var/log/auth.log > recent_auth.log
ALERT=$(zapgpt -q -f recent_auth.log "Detect suspicious login attempts")

if [[ $ALERT == *"suspicious"* ]]; then
    echo "Security Alert: $ALERT" | mail -s "Security Alert" admin@company.com
fi
```

```bash
#!/bin/bash
# Code Review Agent
# Automated code review
for file in src/*.py; do
    REVIEW=$(zapgpt -q -f "$file" --use-prompt coding "Review this code for security issues")
    echo "File: $file" >> code_review.md
    echo "Review: $REVIEW" >> code_review.md
    echo "---" >> code_review.md
done
```

## 🐍 Programmatic API

ZapGPT can be imported and used in your Python scripts:

### Basic Usage

```python
from zapgpt import query_llm

# Simple query
response = query_llm("What is Python?", provider="openai")
print(response)

# With different provider
response = query_llm(
    "Explain quantum computing",
    provider="openrouter",
    model="anthropic/claude-3.5-sonnet"
)
```

### Advanced Usage

```python
from zapgpt import query_llm

# Use predefined prompts
code_review = query_llm(
    "Review this Python function: def hello(): print('hi')",
    provider="openai",
    use_prompt="coding",
    model="gpt-4o"
)

# Custom system prompt
response = query_llm(
    "Write a haiku about programming",
    provider="openai",
    system_prompt="You are a poetic programming mentor.",
    temperature=0.8
)

# Error handling
try:
    response = query_llm("Hello", provider="openai")
except EnvironmentError as e:
    print(f"Missing API key: {e}")
except ValueError as e:
    print(f"Invalid provider: {e}")
```

### API Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `prompt` | str | Required | Your question/prompt |
| `provider` | str | "openai" | LLM provider to use |
| `model` | str | None | Specific model (overrides prompt default) |
| `system_prompt` | str | None | Custom system prompt |
| `use_prompt` | str | None | Use predefined prompt template |
| `temperature` | float | 0.3 | Response randomness (0.0-1.0) |
| `max_tokens` | int | 4096 | Maximum response length |
| `quiet` | bool | True | Suppress logging output |

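In larger scripts it can be tidier to bundle these parameters into one options object and expand it into the call with `**`. The dataclass below simply mirrors the defaults in the table above; it is a sketch for your own code, not something zapgpt ships:

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class QueryOptions:
    # Defaults mirror the API parameter table above.
    provider: str = "openai"
    model: Optional[str] = None
    system_prompt: Optional[str] = None
    use_prompt: Optional[str] = None
    temperature: float = 0.3
    max_tokens: int = 4096
    quiet: bool = True

opts = QueryOptions(provider="openrouter", model="anthropic/claude-3.5-sonnet")
# response = query_llm("Explain quantum computing", **asdict(opts))
```

This keeps per-task presets (e.g. a "creative" profile with a higher temperature) in one place instead of repeating keyword arguments at every call site.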
### Environment Variables

Set the appropriate API key for your chosen provider:

```python
import os
os.environ['OPENAI_API_KEY'] = 'your-key-here'

from zapgpt import query_llm
response = query_llm("Hello world", provider="openai")
```

### Python Automation Examples

```python
# Penetration Testing Automation
import subprocess
from zapgpt import query_llm

def analyze_nmap_scan(target):
    # Run nmap scan
    result = subprocess.run(['nmap', '-sV', target], capture_output=True, text=True)

    # Analyze with LLM
    analysis = query_llm(
        f"Analyze this nmap scan: {result.stdout}",
        provider="openai",
        use_prompt="vuln_assessment"
    )
    return analysis

vulns = analyze_nmap_scan("example.com")
print(f"Vulnerabilities: {vulns}")
```

```python
# Log Analysis Agent
from zapgpt import query_llm

def monitor_logs(log_file):
    with open(log_file, 'r') as f:
        logs = f.read()

    alert = query_llm(
        f"Detect suspicious activity: {logs}",
        provider="openai",
        quiet=True
    )

    if "suspicious" in alert.lower():
        print(f"ALERT: {alert}")
        return True
    return False

# Monitor auth logs
monitor_logs('/var/log/auth.log')
```

[![Using zapgpt for pentesting on Kali](https://i.ytimg.com/vi/vDTwIsEUheE/hqdefault.jpg)](https://www.youtube.com/watch?v=vDTwIsEUheE)


## 🛠️ Features

* Context-aware prompts (memory)
* Easily customizable for your own LLM endpoints
* Usage tracking: see your current spend
* Optional pre-cooked system prompts

## 📝 Configuration & Prompts

ZapGPT stores its configuration and prompts in `~/.config/zapgpt/`:

- **Configuration directory**: `~/.config/zapgpt/`
- **Prompts directory**: `~/.config/zapgpt/prompts/`
- **Database file**: `~/.config/zapgpt/gpt_usage.db`

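The paths above follow an XDG-style layout under the home directory. In your own tooling they can be derived with `pathlib` (illustrative only; zapgpt resolves these locations internally):

```python
from pathlib import Path

# Locations as documented above.
CONFIG_DIR = Path.home() / ".config" / "zapgpt"
PROMPTS_DIR = CONFIG_DIR / "prompts"
USAGE_DB = CONFIG_DIR / "gpt_usage.db"

print(PROMPTS_DIR)  # e.g. /home/you/.config/zapgpt/prompts
```

Handy, for example, when backing up custom prompts or inspecting the usage database with `sqlite3`.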
### Managing Prompts

On first run, zapgpt automatically copies default prompts to your config directory. You can:

- **View config location**: `zapgpt --config`
- **List available prompts**: `zapgpt --list-prompt`
- **Use a specific prompt**: `zapgpt --use-prompt coding "Your question"`
- **Add custom prompts**: Create `.json` files in `~/.config/zapgpt/prompts/`
- **Modify existing prompts**: Edit the `.json` files in your prompts directory

### Default Prompts Included

- `coding` - Programming and development assistance
- `cyber_awareness` - Cybersecurity guidance
- `vuln_assessment` - Vulnerability assessment help
- `kalihacking` - Kali Linux and penetration testing
- `prompting` - Prompt engineering assistance
- `powershell` - PowerShell scripting help
- `default` - General purpose prompt
- `common_base` - Base prompt added to all others

### v2 Features

* The script now uses classes and is much better organized.
* Prompts are no longer hard-coded in the script: drop any new system prompt
  into the prompts folder and use it.
* ✅ **Multi-Provider Support**: Supports OpenAI, OpenRouter, Together, Replicate, DeepInfra, and GitHub AI
* ✅ **Easy Provider Switching**: Use `--provider` flag to switch between providers
* ✅ **Model Selection**: Override model with `-m` flag for any provider

## 🧪 Example

```bash
$ zapgpt "Summarize the Unix philosophy."
> Small is beautiful. Do one thing well. Write programs that work together.
```

## 🙌 Credits

Built with ❤️ by [Amit Agarwal (raj77in)](https://github.com/raj77in) — because LLMs deserve a good CLI.

## 🧙‍♂️ License

MIT — do whatever, just don't blame me if it becomes sentient.

            
