# AI-Guard: Smart Code Quality Gatekeeper
**Goal:** Stop risky PRs (especially AI-generated ones) from merging by enforcing quality, security, and test gates, and by auto-generating targeted tests for changed code.
[CI](https://github.com/Manavj99/ai-guard/actions) · [Repository](https://github.com/Manavj99/ai-guard) · [Python 3.11+](https://www.python.org/downloads/) · [MIT License](https://opensource.org/licenses/MIT)
## 🎯 Why AI-Guard?
Modern teams ship faster with AI. AI-Guard keeps quality high with automated, opinionated gates: lint, types, security, coverage, and speculative tests.
## ✨ Features
- **🔍 Quality Gates**: Linting (flake8), typing (mypy), security scan (bandit)
- **📊 Coverage Enforcement**: Configurable coverage thresholds (default: 80%)
- **🛡️ Security Scanning**: Automated vulnerability detection with Bandit
- **🧪 Test Generation**: Speculative test generation for changed files
- **🤖 Enhanced Test Generation**: LLM-powered, context-aware test generation with OpenAI/Anthropic
- **🌐 Multi-Language Support**: JavaScript/TypeScript support with ESLint, Prettier, Jest
- **📝 PR Annotations**: Advanced GitHub integration with inline comments and review summaries
- **📋 Multi-Format Reports**: SARIF (GitHub Code Scanning), JSON (CI automation), HTML (artifacts)
- **⚡ Performance Optimized**: Parallel execution, caching, and performance monitoring
- **🚀 Fast Execution**: Up to 54% faster with optimized subprocess handling and caching
- **📈 Performance Monitoring**: Built-in performance metrics and reporting
- **⚡ CI Integration**: Single-command GitHub Actions integration
- **🎛️ Configurable**: Easy customization via TOML configuration
## 🚀 Quickstart
### Enhanced Features
AI-Guard now includes several enhanced features for a better development experience:
- **🤖 Enhanced Test Generation**: Use LLMs (OpenAI GPT-4, Anthropic Claude) to generate intelligent, context-aware tests
- **🌐 JavaScript/TypeScript Support**: Quality gates for JS/TS projects with ESLint, Prettier, Jest, and TypeScript
- **📝 PR Annotations**: Generate comprehensive PR reviews with inline comments and suggestions
- **⚡ Performance Optimizations**: Parallel execution, intelligent caching, and performance monitoring
See [ENHANCED_FEATURES.md](ENHANCED_FEATURES.md) for detailed documentation.
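For intuition, here is a minimal sketch of the LLM-assisted approach, assuming the official `openai` Python SDK (>= 1.0); the model name, prompt, and helper function are hypothetical, and AI-Guard's actual prompts and integration live in the enhanced test generator:

```python
# Illustrative only: LLM-assisted test generation in the style described
# above, assuming the official openai SDK (>= 1.0). The model name,
# prompt, and helper are hypothetical, not AI-Guard's internals.
from openai import OpenAI

def draft_tests_for(source: str, path: str) -> str:
    """Ask an LLM to propose pytest cases for a changed module."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = (
        f"Write focused pytest unit tests for the module at {path}. "
        f"Cover edge cases and error paths.\n\n{source}"
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Example: print speculative tests for a changed file
# print(draft_tests_for(open("src/foo/utils.py").read(), "src/foo/utils.py"))
```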
### Performance Features
AI-Guard includes advanced performance optimizations:
- **🚀 Parallel Execution**: Run quality checks concurrently for up to 54% faster execution
- **💾 Intelligent Caching**: Cache results for repeated operations (coverage parsing, config loading)
- **📊 Performance Monitoring**: Built-in metrics tracking and reporting
- **⏱️ Timeout Handling**: Robust subprocess management with configurable timeouts
- **🔧 Optimized Subprocess**: Enhanced subprocess handling with better error management
Use the optimized analyzer for maximum performance:
```bash
# Use optimized analyzer with parallel execution
python -m src.ai_guard.analyzer_optimized --parallel --performance-report
# Compare performance between versions
python performance_comparison.py
```
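Conceptually, parallel mode runs independent gates concurrently with per-check timeouts. A rough sketch of the idea (illustrative only; the tool commands are examples, and the real `analyzer_optimized` scheduling and caching are more involved):

```python
# Illustrative sketch of running independent quality gates concurrently
# with per-check timeouts. Tool commands are examples, not the actual
# analyzer_optimized internals.
import subprocess
from concurrent.futures import ThreadPoolExecutor

CHECKS = {
    "flake8": ["flake8", "src"],
    "mypy": ["mypy", "src"],
    "bandit": ["bandit", "-r", "src"],
}

def run_check(name: str, cmd: list[str], timeout: int = 120) -> tuple[str, bool]:
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
        return name, proc.returncode == 0
    except (subprocess.TimeoutExpired, FileNotFoundError):
        return name, False  # treat timeouts and missing tools as failures

with ThreadPoolExecutor() as pool:
    results = dict(pool.map(lambda item: run_check(*item), CHECKS.items()))

print(results)  # e.g. {'flake8': True, 'mypy': False, 'bandit': True}
```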
### Installation
```bash
# Clone the repository
git clone https://github.com/Manavj99/ai-guard.git
cd ai-guard
# Install dependencies
pip install -r requirements.txt
# Run tests to verify installation
pytest -q
```
### Basic Usage
Run quality checks with default settings:
```bash
python -m src.ai_guard check
```
Run with custom coverage threshold:
```bash
python -m src.ai_guard check --min-cov 90 --skip-tests
```
Run with enhanced test generation and PR annotations:
```bash
# Enhanced test generation with OpenAI
python -m src.ai_guard.analyzer \
  --enhanced-testgen \
  --llm-provider openai \
  --pr-annotations \
  --event "$GITHUB_EVENT_PATH"

# JavaScript/TypeScript quality checks
python -m src.ai_guard.language_support.js_ts_support \
  --quality \
  --files src/**/*.js src/**/*.ts
```
Generate different report formats:
```bash
# SARIF for GitHub Code Scanning (default)
python -m src.ai_guard check --min-cov 80 --skip-tests --sarif ai-guard.sarif
# JSON for CI automation
python -m src.ai_guard check --min-cov 80 --skip-tests --report-format json
# HTML for CI artifacts
python -m src.ai_guard check --min-cov 80 --skip-tests --report-format html
# Custom report path
python -m src.ai_guard check --min-cov 80 --skip-tests --report-format html --report-path reports/quality.html
```
### Using Docker
Build the Docker image:
```bash
# Build image
make docker
# or manually:
docker build -t ai-guard:latest .
```
Run quality checks in Docker:
```bash
# Full scan with tests & SARIF
docker run --rm -v "$PWD":/workspace ai-guard:latest \
  --min-cov 85 \
  --sarif /workspace/ai-guard.sarif

# Quick scan (no tests) on the repo
docker run --rm -v "$PWD":/workspace ai-guard:latest \
  --skip-tests \
  --sarif /workspace/ai-guard.sarif

# Using make target
make docker-run
```
**Why Docker?**
- **Reproducible**: Exact Python + toolchain versions
- **Portable**: Works the same everywhere (laptop, CI, cloud)
- **Secure**: Non-root user, minimal base image
- **Fast**: Only changed files get type/lint checks with `--event`
## ⚙️ Configuration
Create an `ai-guard.toml` file in your project root:
```toml
[gates]
min_coverage = 80
```
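Because AI-Guard targets Python 3.11+, such a file can be read with the standard library alone. A minimal sketch of loading the threshold with stdlib `tomllib` (illustrative; AI-Guard's own `config.py` may expose a different API):

```python
# Illustrative: load gate settings from ai-guard.toml with stdlib tomllib
# (Python 3.11+). AI-Guard's config.py may differ.
import tomllib
from pathlib import Path

def load_min_coverage(path: str = "ai-guard.toml", default: int = 80) -> int:
    cfg_file = Path(path)
    if not cfg_file.exists():
        return default  # fall back to the documented default threshold
    with cfg_file.open("rb") as f:
        cfg = tomllib.load(f)
    return int(cfg.get("gates", {}).get("min_coverage", default))

print(load_min_coverage())  # -> 80 with the example config above
```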
## 🔧 CLI Options
```bash
python -m src.ai_guard check [OPTIONS]

Options:
  --min-cov INTEGER        Override min coverage % [default: 80]
  --skip-tests             Skip running tests (useful for CI)
  --event PATH             Path to GitHub event JSON
  --report-format FORMAT   Output format: sarif, json, or html [default: sarif]
  --report-path PATH       Path to write the report (default depends on format)
  --sarif PATH             (Deprecated) Output SARIF path; use --report-format/--report-path
  --performance-report     Generate performance metrics report
  --help                   Show this message and exit
```
### Optimized Analyzer Options
```bash
python -m src.ai_guard.analyzer_optimized [OPTIONS]

Additional Options:
  --parallel               Enable parallel execution of quality checks
  --performance-report     Generate detailed performance metrics
```
**Report Formats:**
- **`sarif`**: GitHub Code Scanning compatible SARIF output (default)
- **`json`**: Machine-readable JSON summary with gate results and findings
- **`html`**: Human-friendly HTML report for CI artifacts and dashboards
**Default Report Paths:**
- `sarif`: `ai-guard.sarif`
- `json`: `ai-guard.json`
- `html`: `ai-guard.html`
## 📋 Example Outputs
### Console Output
**Passing run:**
```
Changed Python files: ['src/foo/utils.py']
Lint (flake8): PASS
Static types (mypy): PASS
Security (bandit): PASS (0 high findings)
Coverage: PASS (86% ≥ min 85%)
Summary: all gates passed ✅
```
**Failing run:**
```
Changed Python files: ['src/foo/handler.py']
Lint (flake8): PASS
Static types (mypy): FAIL
  src/foo/handler.py:42: error: Argument 1 to "process" has incompatible type "str"; expected "int" [arg-type]
Security (bandit): PASS (0 high findings)
Coverage: FAIL (78% < min 85%)

Summary:
✗ Static types (mypy)
✗ Coverage (min 85%)
Exit code: 1
```
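The exit code follows directly from the gate results: any failing gate fails the run. A minimal illustrative sketch of that aggregation:

```python
# Illustrative: how gate results map to the process exit code.
import sys

gates = [
    ("Lint (flake8)", True),
    ("Static types (mypy)", False),
    ("Security (bandit)", True),
    ("Coverage (min 85%)", False),
]

failed = [name for name, passed in gates if not passed]
for name in failed:
    print(f"✗ {name}")
sys.exit(1 if failed else 0)
```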
### Report Outputs
AI-Guard supports multiple output formats for different use cases:
#### SARIF Output (Default)
GitHub Code Scanning compatible SARIF files:
```json
{
"version": "2.1.0",
"runs": [
{
"tool": { "driver": { "name": "AI-Guard", "version": "0.1.0" } },
"results": [
{
"ruleId": "mypy:arg-type",
"level": "error",
"message": { "text": "Argument 1 to 'process' has incompatible type 'str'; expected 'int'" },
"locations": [
{
"physicalLocation": {
"artifactLocation": { "uri": "src/foo/handler.py" },
"region": { "startLine": 42 }
}
}
]
}
]
}
]
}
```
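Since the output is standard SARIF 2.1.0, downstream tooling can consume it directly. For example, a small sketch (not part of AI-Guard) that tallies findings by severity:

```python
# Illustrative: summarize an AI-Guard SARIF file by severity level.
import json
from collections import Counter

with open("ai-guard.sarif") as f:
    sarif = json.load(f)

levels = Counter(
    result.get("level", "warning")
    for run in sarif.get("runs", [])
    for result in run.get("results", [])
)
print(dict(levels))  # e.g. {'error': 1}
```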
#### JSON Output
Machine-readable summary for CI ingestion and automation:
```json
{
"version": "1.0",
"summary": {
"passed": false,
"gates": [
{"name": "Lint (flake8)", "passed": true, "details": ""},
{"name": "Static types (mypy)", "passed": false, "details": "mypy not found"},
{"name": "Coverage", "passed": true, "details": "85% >= 80%"}
]
},
"findings": [
{
"rule_id": "mypy:arg-type",
"level": "error",
"message": "Argument 1 to 'process' has incompatible type",
"path": "src/foo/handler.py",
"line": 42
}
]
}
```
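This shape makes CI gating straightforward; beyond the one-liner used in the GitHub Actions examples below, a slightly richer consumer might print each gate's status (a sketch against the schema shown above):

```python
# Illustrative: consume ai-guard.json in CI and report failing gates.
import json
import sys

with open("ai-guard.json") as f:
    report = json.load(f)

for gate in report["summary"]["gates"]:
    status = "PASS" if gate["passed"] else "FAIL"
    detail = f" ({gate['details']})" if gate["details"] else ""
    print(f"{status}: {gate['name']}{detail}")

sys.exit(0 if report["summary"]["passed"] else 1)
```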
#### HTML Output
Human-friendly report for CI artifacts and dashboards:
```bash
# Generate HTML report
ai-guard --report-format html --report-path ai-guard.html --min-cov 85
```

Upload it as a CI artifact (GitHub Actions example):

```yaml
- name: Upload HTML report
  uses: actions/upload-artifact@v4
  with:
    name: ai-guard-report
    path: ai-guard.html
```
## 🐙 GitHub Integration
### Automatic PR Checks
AI-Guard automatically runs on every Pull Request to `main` or `master` branches:
1. **Linting**: Enforces flake8 standards
2. **Type Checking**: Runs mypy for static type validation
3. **Security Scan**: Executes Bandit security analysis
4. **Test Coverage**: Ensures minimum coverage threshold
5. **Quality Gates**: Comprehensive quality assessment
6. **SARIF Upload**: Results integrated with GitHub Code Scanning
### Manual Workflow Trigger
You can manually trigger the workflow from the GitHub Actions tab:
1. Go to **Actions** → **AI-Guard**
2. Click **Run workflow**
3. Select branch and click **Run workflow**
### Using Docker in GitHub Actions
If you prefer containerized jobs, you can use the Docker image:
```yaml
name: AI-Guard
on:
  pull_request:
  push:
    branches: [ main ]

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build AI-Guard image
        run: docker build -t ai-guard:latest .

      # Pass the GitHub event JSON so AI-Guard scopes to changed files
      - name: Run AI-Guard
        run: |
          docker run --rm \
            -v "$GITHUB_WORKSPACE":/workspace \
            -v "$GITHUB_EVENT_PATH":/tmp/event.json:ro \
            ai-guard:latest \
            --event /tmp/event.json \
            --min-cov 85 \
            --sarif /workspace/ai-guard.sarif

      # Surface SARIF in the Security tab
      - name: Upload SARIF to code scanning
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: ai-guard.sarif
```
This will fail the job (and block the PR) if any gate fails, and the SARIF will appear in **Security → Code scanning alerts**.
### Multi-Format Reporting in CI
AI-Guard supports multiple output formats for different CI needs:
#### JSON Reports for Automation
Generate machine-readable reports for CI decision making:
```yaml
- name: Run AI-Guard (JSON)
  run: |
    python -m src.ai_guard.analyzer \
      --report-format json \
      --report-path ai-guard.json \
      --min-cov 85 \
      --skip-tests

- name: Parse results for CI logic
  run: |
    if python -c "import json; data=json.load(open('ai-guard.json')); exit(0 if data['summary']['passed'] else 1)"; then
      echo "All gates passed"
    else
      echo "Some gates failed"
      exit 1
    fi
```
#### HTML Reports for Artifacts
Generate human-friendly reports for CI artifacts:
```yaml
- name: Run AI-Guard (HTML)
  run: |
    python -m src.ai_guard.analyzer \
      --report-format html \
      --report-path ai-guard.html \
      --min-cov 85 \
      --skip-tests

- name: Upload HTML report artifact
  uses: actions/upload-artifact@v4
  with:
    name: ai-guard-report
    path: ai-guard.html
    retention-days: 30
```
#### Combined Workflow Example
Run multiple formats in a single workflow:
```yaml
- name: Run AI-Guard (All formats)
  run: |
    python -m src.ai_guard.analyzer \
      --report-format sarif \
      --report-path ai-guard.sarif \
      --min-cov 85 \
      --skip-tests

    python -m src.ai_guard.analyzer \
      --report-format json \
      --report-path ai-guard.json \
      --min-cov 85 \
      --skip-tests

    python -m src.ai_guard.analyzer \
      --report-format html \
      --report-path ai-guard.html \
      --min-cov 85 \
      --skip-tests

- name: Upload all reports
  uses: actions/upload-artifact@v4
  with:
    name: ai-guard-reports
    path: |
      ai-guard.sarif
      ai-guard.json
      ai-guard.html
```
### Workflow Status
- ✅ **Green**: All quality gates passed
- ❌ **Red**: One or more quality gates failed
- 🟡 **Yellow**: Workflow in progress
## 📊 Current Status
- **Test Coverage**: 83% on the core analyzer module (518 statements, 76 missing)
- **Quality Gates**: All passing ✅ (140 tests passed)
- **Security Scan**: Bandit integration active
- **SARIF Output**: GitHub Code Scanning compatible
- **GitHub Actions**: Fully configured and tested
- **Recent Improvements**: Enhanced test coverage, error handling, and reliability
## 🏗️ Project Structure
```
ai-guard/
├── src/ai_guard/                 # Core package
│   ├── analyzer.py               # Main quality gate orchestrator
│   ├── analyzer_optimized.py     # Optimized analyzer with performance features
│   ├── config.py                 # Configuration management
│   ├── diff_parser.py            # Git diff parsing
│   ├── performance.py            # Performance monitoring and optimization utilities
│   ├── report.py                 # Core reporting and result aggregation
│   ├── report_json.py            # JSON report generation
│   ├── report_html.py            # HTML report generation
│   ├── sarif_report.py           # SARIF output generation
│   ├── security_scanner.py       # Security scanning
│   └── tests_runner.py           # Test execution
├── tests/                        # Test suite
├── .github/workflows/            # GitHub Actions
├── ai-guard.toml                 # Configuration
├── performance_comparison.py     # Performance benchmarking script
└── requirements.txt              # Dependencies
```
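The `Changed Python files: [...]` scoping shown in the console output comes from `diff_parser.py`. The idea can be approximated with plain `git diff`; a rough sketch (the module's actual API, e.g. how it reads the `--event` payload, may differ):

```python
# Illustrative: approximate changed-Python-file scoping with git.
# The base ref and helper name are assumptions, not diff_parser.py's API.
import subprocess

def changed_python_files(base: str = "origin/main") -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in out.splitlines() if line.endswith(".py")]

print(changed_python_files())  # e.g. ['src/foo/utils.py']
```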
## 🧪 Testing
Run the complete test suite:
```bash
# Run all tests with coverage
pytest --cov=src --cov-report=term-missing
# Run specific test modules
pytest tests/unit/test_analyzer.py -v
# Run with coverage report
pytest --cov=src --cov-report=html
```
## 🔒 Security
- **Bandit Integration**: Automated security vulnerability scanning
- **Dependency Audit**: pip-audit for known vulnerabilities
- **SARIF Security Events**: GitHub Code Scanning integration
- **Configurable Severity**: Adjustable security thresholds
## 🤝 Contributing
We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for details.
### Development Setup
```bash
# Install development dependencies
pip install -r requirements.txt
# Install pre-commit hooks
pre-commit install
# Run quality checks
make check
# Run tests
make test
```
## 📈 Roadmap
- [x] Parse PR diffs to target functions precisely
- [x] SARIF output + GitHub Code Scanning integration
- [x] Comprehensive quality gates
- [x] Performance optimizations and monitoring
- [x] Parallel execution and intelligent caching
- [x] LLM-assisted test synthesis (opt-in)
- [x] Language adapters (JS/TS support)
- [x] Advanced PR annotations
- [ ] Additional language adapters (Go, Rust)
- [ ] Custom rule engine
- [ ] Distributed execution
- [ ] Machine learning-based optimization
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🆘 Support
- **Issues**: [GitHub Issues](https://github.com/Manavj99/ai-guard/issues)
- **Security**: [SECURITY.md](SECURITY.md)
- **Contributing**: [CONTRIBUTING.md](CONTRIBUTING.md)
---
**Made with ❤️ for better code quality**