smarttest-cli

Name: smarttest-cli
Version: 0.1.2
Summary: SmartTest CLI - Execute test scenarios with secure credential handling
Upload time: 2025-10-08 12:53:31
Requires Python: >=3.8
Keywords: testing, api, cli, automation, ci-cd, continuous-integration, api-testing
Requirements: none recorded

# SmartTest CLI

Enterprise-ready CLI for executing test scenarios with secure, zero-credential-exposure architecture.

## Features

- 🔒 **Zero Credential Exposure**: Auth tokens never leave your network
- ⚡ **Concurrent Execution**: Run up to 5 scenarios simultaneously
- 🎯 **Continue-on-Error**: Individual failures don't stop execution
- 📊 **Real-time Progress**: Live progress updates with rich terminal output
- 📄 **CI Integration**: JUnit XML reports for CI/CD pipelines
- 🌐 **Network Aware**: Proxy and custom CA support for enterprise networks

## Installation

Install the CLI from PyPI:

```bash
pip install smarttest-cli
```

### Get Your PAT Token

1. Visit the SmartTest platform: https://ai-smart-test.vercel.app/
2. Sign up or log in to your account
3. Navigate to **Settings** → **API Tokens**
4. Click **Generate PAT Token**
5. Copy your Personal Access Token (PAT)

**What is a PAT Token?**
A Personal Access Token (PAT) authenticates the CLI with the SmartTest backend. It allows the CLI to fetch test scenarios and submit results securely. This token is separate from any credentials needed to test your APIs.

## Quick Start

### 1. Set Your PAT Token

```bash
export SMARTTEST_TOKEN=your_pat_token_here
```

### 2. Run Test Scenarios

```bash
# Run a specific scenario
smarttest --scenario-id 123

# Run all scenarios for an endpoint
smarttest --endpoint-id 456

# Run all scenarios for a system
smarttest --system-id 789

# With JUnit XML report for CI
smarttest --system-id 789 --report junit.xml

# With JSON output for CI/CD parsing
smarttest --system-id 789 --format json > results.json

# Combined: JSON output + JUnit XML report
smarttest --system-id 789 --format json --report junit.xml > results.json
```

## Output Formats

The CLI supports multiple output formats for different use cases:

| Format | Flag | Use Case | Colors | Progress | Parseable |
|--------|------|----------|--------|----------|-----------|
| **Terminal** | (default) | Local development | ✅ | ✅ | ❌ |
| **JSON** | `--format json` | CI/CD pipelines | ❌ | ❌ | ✅ |
| **JUnit XML** | `--report file.xml` | Test reporting | N/A | N/A | ✅ |

**Quick Examples:**
```bash
# Local development (colorful output)
smarttest --system-id 123

# CI/CD (machine-readable)
smarttest --system-id 123 --format json > results.json

# Test reporting tools
smarttest --system-id 123 --report junit.xml

# All together
smarttest --system-id 123 --format json --report junit.xml > results.json
```

---

### Terminal Output (Default)

Rich, colorful output with real-time progress and endpoint grouping:

```bash
smarttest --system-id 123
```

**Features:**
- 🎨 Color-coded results (green=passed, red=failed, yellow=errors)
- 📊 Real-time progress bar
- 📍 Results grouped by API endpoint
- 🔍 Detailed validation failure messages

**Example:**
```
📋 Discovered 10 scenarios across 3 endpoints:
   • POST /auth/login → 3 scenario(s)
   • GET /users → 5 scenario(s)
   • POST /orders → 2 scenario(s)

⚡ Executing scenarios... ✅ 8 passed, ❌ 2 failed [██████████] 10/10

Results by Endpoint:

✅ POST /auth/login → 3/3 passed

❌ GET /users → 3/5 passed
     ↳ Invalid user ID: Expected status 404, got 500

✅ POST /orders → 2/2 passed

Overall Summary:
✅ 8 passed
❌ 2 failed

Summary: 8/10 scenarios passed (80.0% success rate)
```

### JSON Output (CI/CD Integration)

Machine-readable JSON for programmatic parsing:

```bash
smarttest --system-id 123 --format json > results.json
```

**Features:**
- 🤖 Structured data with summary statistics
- 📊 Endpoint-grouped results
- 🔍 Detailed validation failures
- 📈 Response times and error messages

**Output Structure:**
```json
{
  "summary": {
    "total": 10,
    "passed": 8,
    "failed": 2,
    "errors": 0,
    "success_rate": 80.0,
    "duration_seconds": 12.5
  },
  "endpoints": [
    {
      "endpoint": "POST /auth/login",
      "total": 3,
      "passed": 3,
      "failed": 0,
      "errors": 0,
      "scenarios": [
        {
          "scenario_id": 123,
          "scenario_name": "Valid login",
          "status": "passed",
          "execution_status": "success",
          "http_status": 200,
          "response_time_ms": 145,
          "validations": [...]
        }
      ]
    }
  ],
  "results": [...]
}
```

**Parse with jq:**
```bash
# Get success rate
jq '.summary.success_rate' results.json

# List failed scenarios
jq '.results[] | select(.status == "failed") | .scenario_name' results.json

# Get endpoint statistics
jq '.endpoints[] | {endpoint: .endpoint, passed: .passed, failed: .failed}' results.json
```

### Combined Output

Generate both JSON and JUnit XML:

```bash
smarttest --system-id 123 --format json --report junit.xml > results.json
```

**Use Cases:**
- CI/CD dashboards need JSON for metrics
- Test reporting tools need JUnit XML
- Both formats complement each other

### Quiet Mode (Future)

For minimal output in scripts:

```bash
# Coming soon
smarttest --system-id 123 --quiet
# Output: 8/10 passed (80%)
```

## Configuration

### Optional Configuration File

Create `.smarttest.yml` in your project root for advanced configuration:

```yaml
# API Configuration
api_url: "https://api.smarttest.com"

# Execution Settings
concurrency: 5
timeout: 30

# Enterprise Network Settings
proxy:
  http_proxy: "http://proxy.company.com:8080"
  https_proxy: "https://proxy.company.com:8080"

tls:
  ca_bundle_path: "/path/to/ca-bundle.pem"
  verify_ssl: true
```

**Configuration Options:**
- `api_url`: SmartTest API endpoint (default: https://api.smarttest.com)
- `concurrency`: Number of scenarios to run in parallel (default: 5)
- `timeout`: Request timeout in seconds (default: 30)
- `proxy`: HTTP/HTTPS proxy settings for corporate networks
- `tls`: Custom CA bundle and SSL verification options

## Authentication

### SmartTest Authentication (Required)

The CLI requires a **Personal Access Token (PAT)** to authenticate with SmartTest:

```bash
export SMARTTEST_TOKEN=your_pat_token_here
```

**How to get your PAT token:**
1. Visit https://ai-smart-test.vercel.app/
2. Go to **Settings** → **API Tokens**
3. Generate a new PAT token
4. Copy and save it securely

### Zero-Credential Exposure (Advanced)

**When testing APIs that require authentication**, SmartTest uses a zero-credential-exposure model to keep your API credentials secure.

#### How It Works

1. **SmartTest backend sends auth config references** (metadata only, no actual credentials)
2. **CLI resolves credentials locally** from your environment variables
3. **Credentials never leave your network** or reach SmartTest servers
4. **Requests are made directly from your environment** to the target API

#### Setting Up Target API Credentials

Only needed if you're testing APIs that require authentication:

```bash
# Bearer Token Authentication
export AUTH_CONFIG_123_TOKEN=your_api_bearer_token

# Basic Authentication
export AUTH_CONFIG_456_USERNAME=api_username
export AUTH_CONFIG_456_PASSWORD=api_password

# API Key Authentication
export AUTH_CONFIG_789_API_KEY=your_api_key
```

**Pattern:** `AUTH_CONFIG_{ID}_{CREDENTIAL_TYPE}`
- `{ID}`: Auth configuration ID from SmartTest dashboard
- `{CREDENTIAL_TYPE}`: TOKEN, USERNAME, PASSWORD, or API_KEY

**Example Scenario:**
- You're testing your company's API at `https://api.company.com`
- The API requires a bearer token for authentication
- SmartTest knows the API needs auth (config ID: 123)
- You set: `export AUTH_CONFIG_123_TOKEN=company_bearer_token_xyz`
- CLI resolves the token locally and includes it in requests
- SmartTest never sees `company_bearer_token_xyz`

## CI/CD Integration

### GitHub Actions

**Basic Example (SmartTest token only):**
```yaml
name: API Tests

on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Install SmartTest CLI
        run: pip install smarttest-cli

      - name: Run API Tests
        env:
          # PAT token for SmartTest authentication
          SMARTTEST_TOKEN: ${{ secrets.SMARTTEST_TOKEN }}
        run: |
          smarttest --system-id 123 --report junit.xml

      - name: Publish Test Results
        uses: dorny/test-reporter@v1
        if: always()
        with:
          name: SmartTest Results
          path: junit.xml
          reporter: java-junit
```

**Advanced Example (with target API credentials):**
```yaml
name: API Tests with Auth

on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Install SmartTest CLI
        run: pip install smarttest-cli

      - name: Run API Tests
        env:
          # PAT token for SmartTest authentication
          SMARTTEST_TOKEN: ${{ secrets.SMARTTEST_TOKEN }}
          
          # Target API credentials (zero-credential exposure)
          AUTH_CONFIG_123_TOKEN: ${{ secrets.PRODUCTION_API_TOKEN }}
          AUTH_CONFIG_456_USERNAME: ${{ secrets.STAGING_USERNAME }}
          AUTH_CONFIG_456_PASSWORD: ${{ secrets.STAGING_PASSWORD }}
        run: |
          smarttest --system-id 123 --format json --report junit.xml > results.json

      - name: Check Success Rate
        run: |
          SUCCESS_RATE=$(jq '.summary.success_rate' results.json)
          echo "Test Success Rate: ${SUCCESS_RATE}%"
          
          # Fail if success rate < 80%
          if (( $(echo "$SUCCESS_RATE < 80" | bc -l) )); then
            echo "❌ Test success rate below 80% threshold"
            exit 1
          fi

      - name: Upload Results
        if: always()
        uses: actions/upload-artifact@v3
        with:
          name: test-results
          path: |
            junit.xml
            results.json

      - name: Publish Test Report
        uses: dorny/test-reporter@v1
        if: always()
        with:
          name: API Test Results
          path: junit.xml
          reporter: java-junit
```

### GitLab CI

**Basic Example:**
```yaml
api-tests:
  stage: test
  image: python:3.11
  before_script:
    - pip install smarttest-cli
  script:
    - smarttest --system-id 123 --report junit.xml
  artifacts:
    when: always
    reports:
      junit: junit.xml
  variables:
    # PAT token for SmartTest authentication
    SMARTTEST_TOKEN: $SMARTTEST_TOKEN
```

**Advanced Example (with JSON output):**
```yaml
api-tests:
  stage: test
  image: python:3.11
  before_script:
    - pip install smarttest-cli
  script:
    - smarttest --system-id 123 --format json --report junit.xml > results.json
    - |
      SUCCESS_RATE=$(jq '.summary.success_rate' results.json)
      echo "API Tests: ${SUCCESS_RATE}% passed"
  artifacts:
    when: always
    reports:
      junit: junit.xml
    paths:
      - results.json
  variables:
    # PAT token for SmartTest authentication
    SMARTTEST_TOKEN: $SMARTTEST_TOKEN
    # Target API credentials (if needed)
    AUTH_CONFIG_123_TOKEN: $PRODUCTION_API_TOKEN
```

### Jenkins

**Basic Pipeline:**
```groovy
pipeline {
    agent any
    environment {
        // PAT token for SmartTest authentication
        SMARTTEST_TOKEN = credentials('smarttest-token')
    }
    stages {
        stage('Install CLI') {
            steps {
                sh 'pip install smarttest-cli'
            }
        }
        stage('Run API Tests') {
            steps {
                sh 'smarttest --system-id 123 --format json --report junit.xml > results.json'

                script {
                    def results = readJSON file: 'results.json'
                    echo "Success Rate: ${results.summary.success_rate}%"
                    echo "Passed: ${results.summary.passed}"
                    echo "Failed: ${results.summary.failed}"

                    if (results.summary.success_rate < 80) {
                        error("Test success rate below 80%")
                    }
                }
            }
            post {
                always {
                    junit 'junit.xml'
                    archiveArtifacts artifacts: 'results.json', fingerprint: true
                }
            }
        }
    }
}
```

**With Target API Credentials:**
```groovy
pipeline {
    agent any
    environment {
        // PAT token for SmartTest authentication
        SMARTTEST_TOKEN = credentials('smarttest-token')
        
        // Target API credentials (zero-credential exposure)
        AUTH_CONFIG_123_TOKEN = credentials('production-api-token')
        AUTH_CONFIG_456_USERNAME = credentials('staging-username')
        AUTH_CONFIG_456_PASSWORD = credentials('staging-password')
    }
    stages {
        // ... same as above
    }
}
```

## Command Reference

### Basic Commands

```bash
# Run by scenario ID
smarttest --scenario-id 123

# Run by endpoint ID
smarttest --endpoint-id 456

# Run by system ID
smarttest --system-id 789
```

### Output Options

```bash
# Terminal output (default, with colors and progress)
smarttest --system-id 123

# JSON output (machine-readable)
smarttest --system-id 123 --format json

# Generate JUnit XML report
smarttest --system-id 123 --report junit.xml

# JSON + JUnit combined
smarttest --system-id 123 --format json --report junit.xml > results.json
```

### Configuration

```bash
# Use custom config file
smarttest --system-id 123 --config my-config.yml

# Config file + custom output
smarttest --system-id 123 --config prod.yml --format json
```

### Exit Codes

- **0**: All scenarios passed
- **1**: One or more scenarios failed or had errors
- **130**: Execution interrupted (Ctrl+C)

The exit code is the same regardless of the `--format` option.
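These codes can gate a pipeline step directly. A minimal sketch of the pattern; `run_tests` here is a placeholder stub (returning 1 to simulate failures) standing in for a real invocation such as `smarttest --system-id 123 --report junit.xml`:

```bash
#!/bin/sh
# run_tests stands in for the real command, e.g.:
#   smarttest --system-id 123 --report junit.xml
# The stub returns 1 to simulate a run with failures.
run_tests() { return 1; }

run_tests
case $? in
  0)   echo "all scenarios passed" ;;
  1)   echo "one or more scenarios failed or errored" ;;
  130) echo "execution interrupted (Ctrl+C)" ;;
  *)   echo "unexpected exit code" ;;
esac
```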

## Error Handling

The CLI classifies each result into one of the following outcomes:

- **✅ Success**: HTTP request succeeded, all validations passed
- **❌ Failed**: HTTP request succeeded, but validations failed
- **⚠️  Network Timeout**: Request timed out
- **⚠️  Network Error**: Connection failed
- **⚠️  Auth Error**: Authentication resolution failed
- **⚠️  Unknown Error**: Unexpected error occurred
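In JSON mode, each result carries its outcome in the `status` and `execution_status` fields. A small sketch (assuming `jq` is installed, and using the field names from the output structure shown earlier; the sample file below is illustrative) that tallies results by status:

```bash
#!/bin/sh
# Illustrative sample with the fields from the JSON output section
cat > results.json <<'EOF'
{"results": [
  {"scenario_id": 1, "status": "passed"},
  {"scenario_id": 2, "status": "passed"},
  {"scenario_id": 3, "status": "failed"}
]}
EOF

# Print one line per status, e.g. "failed: 1" and "passed: 2"
jq -r '.results | group_by(.status)[] | "\(.[0].status): \(length)"' results.json
```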

## Architecture

```
┌─ Scenario Discovery (API with rate limiting)
├─ Skip scenarios without validations
├─ Concurrent Execution (max 5)
│  ├─ Fetch definition (auth config references only)
│  ├─ Resolve auth locally (zero credential exposure)
│  ├─ Execute HTTP request (with comprehensive error handling)
│  └─ Submit results (continue on any error)
└─ Generate reports and exit
```

## Troubleshooting

### Authentication Issues

**SmartTest Authentication:**
```bash
# Verify your PAT token is set
echo $SMARTTEST_TOKEN

# Test connection to SmartTest
curl -H "Authorization: Bearer $SMARTTEST_TOKEN" https://api.smarttest.com/health
```

**Target API Authentication:**
```bash
# Verify target API credentials are set
echo $AUTH_CONFIG_123_TOKEN

# Check which auth configs your scenarios use in the SmartTest dashboard
```
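Echoing token values into a terminal or CI log leaks secrets; a pre-flight check can confirm the variables are set without printing them. This is a sketch — the placeholder export and config ID `123` are illustrative, so list whichever auth config IDs your scenarios actually use:

```bash
#!/bin/sh
export SMARTTEST_TOKEN=your_pat_token_here   # placeholder for illustration

# check NAME VALUE — report presence without leaking the value
check() {
  if [ -n "$2" ]; then echo "$1 is set"; else echo "$1 is MISSING"; fi
}

check SMARTTEST_TOKEN "$SMARTTEST_TOKEN"
check AUTH_CONFIG_123_TOKEN "$AUTH_CONFIG_123_TOKEN"
```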

### Network Issues

```bash
# Test with custom config
smarttest --scenario-id 123 --config .smarttest.yml

# Enable request debugging
SMARTTEST_DEBUG=1 smarttest --scenario-id 123

# Test single scenario first
smarttest --scenario-id 123 --format json
```

### Rate Limiting

The CLI automatically handles rate limiting with exponential backoff. If you encounter persistent rate limiting:

1. Reduce concurrency in `.smarttest.yml`
2. Contact support to increase rate limits
3. Spread execution across longer time periods
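For step 1, a lower `concurrency` value in `.smarttest.yml` (see the Configuration section) throttles execution; the other settings can stay at their defaults:

```yaml
# .smarttest.yml — throttle execution while rate-limited
concurrency: 2   # down from the default of 5
timeout: 30
```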

## Support

- 🌐 Platform: https://ai-smart-test.vercel.app/
- 📚 Documentation: https://ai-smart-test.vercel.app/docs
- 💬 Support: ai.smart.test.contact@gmail.com

            
