gitlab-pipeline-analyzer

- Version: 0.10.0 (PyPI)
- Summary: FastMCP server for analyzing GitLab CI/CD pipeline failures
- Uploaded: 2025-09-08 15:31:45
- Requires Python: >=3.10
- Keywords: gitlab, ci/cd, pipeline, analysis, mcp, fastmcp
# GitLab Pipeline Analyzer MCP Server

A comprehensive FastMCP server that analyzes GitLab CI/CD pipeline failures with intelligent caching, structured resources, and guided prompts for AI agents.

## ✨ Key Features

### ๐Ÿ” **Comprehensive Analysis**

- Deep pipeline failure analysis with error extraction and merge request context
- Intelligent error categorization and pattern detection
- Support for pytest, build, and general CI/CD failures
- **โœจ NEW in v0.8.0**: Complete merge request information integration with Jira ticket extraction
- **๐ŸŽฏ NEW in v0.8.0**: Smart filtering of MR data based on pipeline type (only shows MR data for actual MR pipelines)
- **๐Ÿ“ NEW in v0.8.2**: Code review integration - automatically includes discussions, notes, approval status, and unresolved feedback from merge requests for AI-powered context-aware fixes

### 💾 **Intelligent Caching**

- SQLite-based caching for faster analysis
- Automatic cache invalidation and cleanup
- Significant performance improvements (90% reduction in API calls)

### 📦 **MCP Resources & Smart Data Access**

- **Resource-First Architecture**: Always try `get_mcp_resource` before running analysis tools
- **Efficient Caching**: Resources serve cached data instantly without re-analysis
- **Smart URIs**: Intuitive resource patterns like `gl://pipeline/{project_id}/{pipeline_id}`
- **Navigation Links**: Related resources automatically suggested in responses
- **Pipeline Resources**: Complete pipeline overview with conditional MR data
- **Job Resources**: Individual job analysis with error extraction
- **File Resources**: File-specific error details with trace context
- **Error Resources**: Detailed error analysis with fix guidance
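
Because these URIs are plain strings, a client can build them with a tiny helper. The function below is illustrative only (it is not part of the server's API); the URI shapes follow the patterns listed above.

```python
def resource_uri(kind: str, *parts: object) -> str:
    """Build a gl:// resource URI, e.g. gl://pipeline/{project_id}/{pipeline_id}."""
    return "gl://" + "/".join([kind, *map(str, parts)])

# Pipeline overview resource for project 83, pipeline 1594344
uri = resource_uri("pipeline", 83, 1594344)  # "gl://pipeline/83/1594344"
```

The same helper covers job, file, and error resources by changing `kind` and the trailing path segments.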

### 🎯 **Intelligent Prompts & Workflows**

- **13+ Specialized Prompts** across 5 categories for comprehensive CI/CD guidance
- **Advanced Workflows**: `investigation-wizard`, `pipeline-comparison`, `fix-strategy-planner`
- **Performance Optimization**: `performance-investigation`, `ci-cd-optimization`, `resource-efficiency`
- **Educational & Learning**: `learning-path`, `knowledge-sharing`, `mentoring-guide`
- **Role-based Customization**: Adapts to user expertise (Beginner/Intermediate/Expert/SRE/Manager)
- **Progressive Complexity**: Multi-step workflows with context continuity

### 🚀 **Multiple Transport Protocols**

- STDIO (default) - For local tools and integrations
- HTTP - For web deployments and remote access
- SSE - For real-time streaming connections

## Architecture Overview

```
┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│   MCP Client    │    │   Cache Layer    │    │  GitLab API     │
│    (Agents)     │◄──►│   (SQLite DB)    │◄──►│   (External)    │
└─────────────────┘    └──────────────────┘    └─────────────────┘
         │                       │                       │
         ▼                       ▼                       ▼
┌─────────────────────────────────────────────────────────────────┐
│                    MCP Server                                   │
├─────────────────┬─────────────────┬─────────────────────────────┤
│   Resources     │     Tools       │          Prompts            │
│                 │                 │                             │
│ • Pipeline      │ • Complex       │ • Advanced Workflows        │
│ • Job           │   Analysis      │ • Performance Optimization  │
│ • Analysis      │ • Repository    │ • Educational & Learning    │
│ • Error         │   Search        │ • Investigation & Debug     │
│                 │ • Pagination    │ • Role-based Guidance       │
└─────────────────┴─────────────────┴─────────────────────────────┘
```

## Installation

```bash
# Install dependencies
uv pip install -e .

# Or with pip
pip install -e .
```

## Configuration

Set the following environment variables:

```bash
# Required: GitLab connection settings
export GITLAB_URL="https://gitlab.com" # Your GitLab instance URL
export GITLAB_TOKEN="your-access-token" # Your GitLab personal access token

# Optional: Configure database storage location
export MCP_DATABASE_PATH="analysis_cache.db" # Path to SQLite database (default: analysis_cache.db)

# Optional: Configure transport settings
export MCP_TRANSPORT="stdio" # Transport protocol: stdio, http, sse (default: stdio)
export MCP_HOST="127.0.0.1" # Host for HTTP/SSE transport (default: 127.0.0.1)
export MCP_PORT="8000" # Port for HTTP/SSE transport (default: 8000)
export MCP_PATH="/mcp" # Path for HTTP transport (default: /mcp)

# Optional: Configure automatic cache cleanup
export MCP_AUTO_CLEANUP_ENABLED="true" # Enable automatic cleanup (default: true)
export MCP_AUTO_CLEANUP_INTERVAL_MINUTES="60" # Cleanup interval in minutes (default: 60)
export MCP_AUTO_CLEANUP_MAX_AGE_HOURS="24" # Max age before cleanup in hours (default: 24)

# Optional: Configure debug output
export MCP_DEBUG_LEVEL="0" # Debug level: 0=none, 1=basic, 2=verbose, 3=very verbose (default: 0)
```


Note: Project ID is now passed as a parameter to each tool, making the server more flexible.
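
Taken together, the variables above fully describe the server's runtime configuration. As a sketch (mirroring the documented names and defaults; the helper itself is not the server's actual code):

```python
import os

def load_config(env=os.environ) -> dict:
    """Collect the documented GITLAB_*/MCP_* settings, applying the documented defaults."""
    return {
        "gitlab_url": env.get("GITLAB_URL"),      # required
        "gitlab_token": env.get("GITLAB_TOKEN"),  # required
        "database_path": env.get("MCP_DATABASE_PATH", "analysis_cache.db"),
        "transport": env.get("MCP_TRANSPORT", "stdio"),
        "host": env.get("MCP_HOST", "127.0.0.1"),
        "port": int(env.get("MCP_PORT", "8000")),
        "path": env.get("MCP_PATH", "/mcp"),
        "auto_cleanup": env.get("MCP_AUTO_CLEANUP_ENABLED", "true").lower() == "true",
        "cleanup_interval_minutes": int(env.get("MCP_AUTO_CLEANUP_INTERVAL_MINUTES", "60")),
        "cleanup_max_age_hours": int(env.get("MCP_AUTO_CLEANUP_MAX_AGE_HOURS", "24")),
        "debug_level": int(env.get("MCP_DEBUG_LEVEL", "0")),
    }
```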

## Running the Server

The server supports three transport protocols:

### 1. STDIO Transport (Default)

Best for local tools and command-line scripts:

```bash
gitlab-analyzer
```

Or explicitly specify the transport:

```bash
gitlab-analyzer --transport stdio
```

### 2. HTTP Transport

Recommended for web deployments and remote access:

```bash
gitlab-analyzer-http
```

Or using the main server with transport option:

```bash
gitlab-analyzer --transport http --host 127.0.0.1 --port 8000 --path /mcp
```

Or with environment variables:

```bash
MCP_TRANSPORT=http MCP_HOST=0.0.0.0 MCP_PORT=8080 gitlab-analyzer
```

The HTTP server will be available at: `http://127.0.0.1:8000/mcp`

### 3. SSE Transport

For compatibility with existing SSE clients:

```bash
gitlab-analyzer-sse
```

Or using the main server with transport option:

```bash
gitlab-analyzer --transport sse --host 127.0.0.1 --port 8000
```

The SSE server will be available at: `http://127.0.0.1:8000`

## Using with MCP Clients

### HTTP Transport Client Example

```python
from fastmcp.client import Client

# Connect to HTTP MCP server
async with Client("http://127.0.0.1:8000/mcp") as client:
    # List available tools
    tools = await client.list_tools()

    # Analyze a pipeline
    result = await client.call_tool("analyze_pipeline", {
        "project_id": "123",
        "pipeline_id": "456"
    })
```

### VS Code Local MCP Configuration

This project includes a local MCP configuration in `.vscode/mcp.json` for easy development:

```json
{
  "servers": {
    "gitlab-pipeline-analyzer": {
      "command": "uv",
      "args": ["run", "gitlab-analyzer"],
      "env": {
        "GITLAB_URL": "${input:gitlab_instance_url}",
        "GITLAB_TOKEN": "${input:gitlab_access_token}"
      }
    }
  },
  "inputs": [
    {
      "id": "gitlab_instance_url",
      "type": "promptString",
      "description": "GitLab Instance URL"
    },
    {
      "id": "gitlab_access_token",
      "type": "promptString",
      "description": "GitLab Personal Access Token"
    }
  ]
}
```

This configuration uses **VS Code MCP inputs** which:

- **🔒 More secure** - No credentials stored on disk
- **🎯 Interactive** - VS Code prompts for credentials when needed
- **⚡ Session-based** - Credentials only exist in memory

**Alternative: `.env` file approach** for rapid development:

1. Copy the example environment file:

   ```bash
   cp .env.example .env
   ```

2. Edit `.env` with your GitLab credentials:

   ```bash
   GITLAB_URL=https://your-gitlab-instance.com
   GITLAB_TOKEN=your-personal-access-token
   ```

3. Update `.vscode/mcp.json` to remove the `env` and `inputs` sections - the server will auto-load from `.env`

Both approaches work - choose based on your security requirements and workflow preferences.

### Claude Desktop Configuration

Add the following to your Claude Desktop `claude_desktop_config.json` file:

```json
{
  "servers": {
    "gitlab-pipeline-analyzer": {
      "type": "stdio",
      "command": "uvx",
      "args": [
        "--from",
        "gitlab_pipeline_analyzer==0.10.0",
        "gitlab-analyzer",
        "--transport",
        "${input:mcp_transport}"
      ],
      "env": {
        "GITLAB_URL": "${input:gitlab_url}",
        "GITLAB_TOKEN": "${input:gitlab_token}"
      }
    },
    "local-gitlab-analyzer": {
      "type": "stdio",
      "command": "uv",
      "args": ["run", "gitlab-analyzer"],
      "cwd": "/path/to/your/mcp/project",
      "env": {
        "GITLAB_URL": "${input:gitlab_url}",
        "GITLAB_TOKEN": "${input:gitlab_token}"
      }
    },
    "acme-gitlab-analyzer": {
      "command": "uvx",
      "args": ["--from", "gitlab-pipeline-analyzer", "gitlab-analyzer"],
      "env": {
        "GITLAB_URL": "https://gitlab.acme-corp.com",
        "GITLAB_TOKEN": "your-token-here"
      }
    }
  },
  "inputs": [
    {
      "id": "mcp_transport",
      "type": "promptString",
      "description": "MCP Transport (stdio/http/sse)"
    },
    {
      "id": "gitlab_url",
      "type": "promptString",
      "description": "GitLab Instance URL"
    },
    {
      "id": "gitlab_token",
      "type": "promptString",
      "description": "GitLab Personal Access Token"
    }
  ]
}
```

#### Configuration Examples Explained:

1. **`gitlab-pipeline-analyzer`** - Uses the published package from PyPI with dynamic inputs
2. **`local-gitlab-analyzer`** - Uses local development version with dynamic inputs
3. **`acme-gitlab-analyzer`** - Uses the published package with hardcoded company-specific values

#### Dynamic vs Static Configuration:

- **Dynamic inputs** (using `${input:variable_name}`) prompt you each time
- **Static values** are hardcoded for convenience but less secure
- For security, consider using environment variables or VS Code settings

### Remote Server Setup

For production deployments or team usage, you can deploy the MCP server on a remote machine and connect to it via HTTP transport.

#### Server Deployment

1. **Deploy on Remote Server:**

```bash
# On your remote server (e.g., cloud instance)
git clone <your-mcp-repo>
cd mcp
uv sync

# Set environment variables
export GITLAB_URL="https://gitlab.your-company.com"
export GITLAB_TOKEN="your-gitlab-token"
export MCP_HOST="0.0.0.0"  # Listen on all interfaces
export MCP_PORT="8000"
export MCP_PATH="/mcp"

# Start HTTP server
uv run python -m gitlab_analyzer.servers.stdio_server --transport http --host 0.0.0.0 --port 8000
```

2. **Using Docker (Recommended for Production):**

```dockerfile
# Dockerfile
FROM python:3.12-slim

WORKDIR /app
COPY . .

RUN pip install uv && uv sync

EXPOSE 8000

ENV MCP_HOST=0.0.0.0
ENV MCP_PORT=8000
ENV MCP_PATH=/mcp

CMD ["uv", "run", "python", "server.py", "--transport", "http"]
```

```bash
# Build and run
docker build -t gitlab-mcp-server .
docker run -p 8000:8000 \
  -e GITLAB_URL="https://gitlab.your-company.com" \
  -e GITLAB_TOKEN="your-token" \
  gitlab-mcp-server
```

#### Client Configuration for Remote Server

**VS Code Claude Desktop Configuration:**

```json
{
  "servers": {
    "remote-gitlab-analyzer": {
      "type": "http",
      "url": "https://your-mcp-server.com:8000/mcp"
    },
    "local-stdio-analyzer": {
      "type": "stdio",
      "command": "uv",
      "args": ["run", "gitlab-analyzer"],
      "cwd": "/path/to/your/mcp/project",
      "env": {
        "GITLAB_URL": "${input:gitlab_url}",
        "GITLAB_TOKEN": "${input:gitlab_token}"
      }
    }
  },
  "inputs": [
    {
      "id": "gitlab_url",
      "type": "promptString",
      "description": "GitLab Instance URL (for local STDIO servers only)"
    },
    {
      "id": "gitlab_token",
      "type": "promptString",
      "description": "GitLab Personal Access Token (for local STDIO servers only)"
    }
  ]
}
```

**Important Notes:**

- **Remote HTTP servers**: Environment variables are configured on the server side during deployment
- **Local STDIO servers**: Environment variables are passed from the client via the `env` block
- **Your server reads `GITLAB_URL` and `GITLAB_TOKEN` from its environment at startup**
- **The client cannot change server-side environment variables for HTTP transport**

#### Current Limitations:

**Single GitLab Instance per Server:**

- Each HTTP server deployment can only connect to **one GitLab instance** with **one token**
- **No user-specific authorization** - all clients share the same GitLab credentials
- **No multi-tenant support** - cannot serve multiple GitLab instances from one server

#### Workarounds for Multi-GitLab Support:

**Option 1: Multiple Server Deployments**

```bash
# Server 1 - Company GitLab
export GITLAB_URL="https://gitlab.company.com"
export GITLAB_TOKEN="company-token"
uv run python -m gitlab_analyzer.servers.stdio_server --transport http --port 8001

# Server 2 - Personal GitLab
export GITLAB_URL="https://gitlab.com"
export GITLAB_TOKEN="personal-token"
uv run python -m gitlab_analyzer.servers.stdio_server --transport http --port 8002
```

**Option 2: Use STDIO Transport for User-Specific Auth**

```json
{
  "servers": {
    "company-gitlab": {
      "type": "stdio",
      "command": "uv",
      "args": ["run", "gitlab-analyzer"],
      "env": {
        "GITLAB_URL": "https://gitlab.company.com",
        "GITLAB_TOKEN": "company-token"
      }
    },
    "personal-gitlab": {
      "type": "stdio",
      "command": "uv",
      "args": ["run", "gitlab-analyzer"],
      "env": {
        "GITLAB_URL": "https://gitlab.com",
        "GITLAB_TOKEN": "personal-token"
      }
    }
  }
}
```

**Option 3: Future Enhancement - Multi-Tenant Server**
To support user-specific authorization, the server would need modifications to:

- Accept GitLab URL and token as **tool parameters** instead of environment variables
- Implement **per-request authentication** instead of singleton GitLab client
- Add **credential management** and **security validation**
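
For illustration, such a multi-tenant design might carry credentials per request rather than per process. Everything below is hypothetical (`GitLabCredentials` does not exist in the server today); the one concrete detail is that the GitLab REST API accepts a personal access token in the `PRIVATE-TOKEN` header:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GitLabCredentials:
    """Hypothetical per-request credentials replacing server-wide env vars."""
    url: str
    token: str

    def api_base(self) -> str:
        # Normalize trailing slashes before appending the REST API prefix
        return self.url.rstrip("/") + "/api/v4"

    def api_headers(self) -> dict:
        # GitLab's REST API authenticates with the PRIVATE-TOKEN header
        return {"PRIVATE-TOKEN": self.token}

# Each tool call would receive its own credentials instead of sharing a singleton client
creds = GitLabCredentials("https://gitlab.company.com/", "company-token")
```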

#### Recommended Approach by Use Case:

**Single Team/Company:**

- ✅ **HTTP server** with company GitLab credentials
- Simple deployment, shared access

**Multiple GitLab Instances:**

- ✅ **STDIO transport** for user-specific credentials
- ✅ **Multiple HTTP servers** (one per GitLab instance)
- Each approach has trade-offs in complexity vs. performance

**Personal Use:**

- ✅ **STDIO transport** for maximum flexibility
- Environment variables can be changed per session


**Key Differences:**
- **HTTP servers** (`type: "http"`) don't use `env` - they get environment variables from their deployment
- **STDIO servers** (`type: "stdio"`) use `env` because the client spawns the server process locally
- **Remote HTTP servers** are already running with their own environment configuration

#### How Environment Variables Work:

**For Remote HTTP Servers:**
- Environment variables are set **on the server side** during deployment
- The client just connects to the HTTP endpoint
- No environment variables needed in client configuration

**For Local STDIO Servers:**
- Environment variables are passed **from client to server** via the `env` block
- The client spawns the server process with these variables
- Useful for dynamic configuration per client

**Example Server-Side Environment Setup:**
```bash
# On remote server
export GITLAB_URL="https://gitlab.company.com"
export GITLAB_TOKEN="server-side-token"
uv run python -m gitlab_analyzer.servers.stdio_server --transport http --host 0.0.0.0 --port 8000
````

**Example Client-Side for STDIO:**

```json
{
  "type": "stdio",
  "env": {
    "GITLAB_URL": "https://gitlab.personal.com",
    "GITLAB_TOKEN": "client-specific-token"
  }
}
```

**Python Client for Remote Server:**

```python
from fastmcp.client import Client

# Connect to remote HTTP MCP server
async with Client("https://your-mcp-server.com:8000/mcp") as client:
    # List available tools
    tools = await client.list_tools()

    # Analyze a pipeline
    result = await client.call_tool("analyze_pipeline", {
        "project_id": "123",
        "pipeline_id": "456"
    })
```

#### Security Considerations for Remote Deployment

1. **HTTPS/TLS:**

```nginx
# Use reverse proxy (nginx/traefik) with SSL
# Example nginx config:
server {
    listen 443 ssl;
    server_name your-mcp-server.com;

    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    location /mcp {
        proxy_pass http://localhost:8000/mcp;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

2. **Authentication (if needed):**

```bash
# Add API key validation in your deployment
export MCP_API_KEY="your-secret-api-key"

# Client usage with API key
curl -H "Authorization: Bearer your-secret-api-key" \
     https://your-mcp-server.com:8000/mcp
```

3. **Firewall Configuration:**

```bash
# Only allow specific IPs/networks
ufw allow from 192.168.1.0/24 to any port 8000
ufw deny 8000
```

### Configuration for Multiple Servers

```python
config = {
    "mcpServers": {
        "local-gitlab": {
            "url": "http://127.0.0.1:8000/mcp",
            "transport": "http"
        },
        "remote-gitlab": {
            "url": "https://mcp-server.your-company.com:8000/mcp",
            "transport": "http"
        }
    }
}

async with Client(config) as client:
    result = await client.call_tool("gitlab_analyze_pipeline", {
        "project_id": "123",
        "pipeline_id": "456"
    })
```

## Development

### Setup

```bash
# Install dependencies
uv sync --all-extras

# Install pre-commit hooks
uv run pre-commit install
```

### Running tests

```bash
# Run all tests
uv run pytest

# Run tests with coverage
uv run pytest --cov=gitlab_analyzer --cov-report=html

# Run security scans
uv run bandit -r src/
```

### Code quality

```bash
# Format code
uv run ruff format

# Lint code
uv run ruff check --fix

# Type checking
uv run mypy src/
```

## GitHub Actions

This project includes comprehensive CI/CD workflows:

### CI Workflow (`.github/workflows/ci.yml`)

- **Triggers**: Push to `main`/`develop`, Pull requests
- **Features**:
  - Tests across Python 3.10, 3.11, 3.12
  - Code formatting with Ruff
  - Linting with Ruff
  - Type checking with MyPy
  - Security scanning with Bandit
  - Test coverage reporting
  - Build validation

### Release Workflow (`.github/workflows/release.yml`)

- **Triggers**: GitHub releases, Manual dispatch
- **Features**:
  - Automated PyPI publishing with trusted publishing
  - Support for TestPyPI deployment
  - Build artifacts validation
  - Secure publishing without API tokens

### Security Workflow (`.github/workflows/security.yml`)

- **Triggers**: Push, Pull requests, Weekly schedule
- **Features**:
  - Bandit security scanning
  - Trivy vulnerability scanning
  - SARIF upload to GitHub Security tab
  - Automated dependency scanning

### Setting up PyPI Publishing

1. **Configure PyPI Trusted Publishing**:

   - Go to [PyPI](https://pypi.org/manage/account/publishing/) or [TestPyPI](https://test.pypi.org/manage/account/publishing/)
   - Add a new trusted publisher with:
     - PyPI project name: `gitlab-pipeline-analyzer`
     - Owner: `your-github-username`
     - Repository name: `your-repo-name`
     - Workflow name: `release.yml`
     - Environment name: `pypi` (or `testpypi`)

2. **Create GitHub Environment**:

   - Go to repository Settings โ†’ Environments
   - Create environments named `pypi` and `testpypi`
   - Configure protection rules as needed

3. **Publishing**:
   - **TestPyPI**: Use workflow dispatch in Actions tab
   - **PyPI**: Create a GitHub release to trigger automatic publishing

### Pre-commit Hooks

The project uses pre-commit hooks for code quality:

```bash
# Install hooks
uv run pre-commit install

# Run hooks manually
uv run pre-commit run --all-files
```

Hooks include:

- Trailing whitespace removal
- End-of-file fixing
- YAML/TOML validation
- Ruff formatting and linting
- MyPy type checking
- Bandit security scanning

## Usage

### Running the server

```bash
# Run with Python
python gitlab_analyzer.py

# Or with FastMCP CLI
fastmcp run gitlab_analyzer.py:mcp
```

### Available tools

The MCP server provides **14 essential tools** for GitLab CI/CD pipeline analysis:

#### 🎯 Core Analysis Tool

1. **failed_pipeline_analysis(project_id, pipeline_id)** - Comprehensive pipeline analysis with intelligent parsing, caching, and resource generation. **NEW in v0.8.0**: Includes MR context and Jira ticket extraction for merge request pipelines

#### ๐Ÿ” Repository Search Tools

2. **search_repository_code(project_id, search_keywords, ...)** - Search code with filtering by extension/path/filename
3. **search_repository_commits(project_id, search_keywords, ...)** - Search commit messages with branch filtering

#### 💾 Cache Management Tools

4. **cache_stats()** - Get cache statistics and storage information
5. **cache_health()** - Check cache system health and performance
6. **clear_cache(cache_type, project_id, max_age_hours)** - Clear cached data with flexible options

#### 🗑️ Specialized Cache Cleanup Tools

7. **clear_pipeline_cache(project_id, pipeline_id)** - Clear all cached data for a specific pipeline
8. **clear_job_cache(project_id, job_id)** - Clear all cached data for a specific job

#### 🔗 Resource Access Tool

9. **get_mcp_resource(resource_uri)** - Access data from MCP resource URIs without re-running analysis

#### 🧹 Additional Tools

10. **parse_trace_for_errors(trace_content)** - **NEW in v0.8.0**: Parse CI/CD trace content and extract errors without database storage
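
Conceptually, a trace-to-errors pass scans log lines for error markers and records where they occur. The sketch below is illustrative only, not the server's implementation (the real parser also classifies and deduplicates errors):

```python
def extract_error_lines(trace: str) -> list[dict]:
    """Collect lines that look like errors, with their 1-based line numbers."""
    markers = ("ERROR", "Error:", "FAILED", "Traceback (most recent call last):")
    errors = []
    for lineno, line in enumerate(trace.splitlines(), start=1):
        if any(marker in line for marker in markers):
            errors.append({"line": lineno, "message": line.strip()})
    return errors
```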

### Traceback Filtering Options

The error analysis tools support advanced filtering to reduce noise in large traceback responses:

#### Parameters

- **`include_traceback`** (bool, default: `True`): Include/exclude all traceback information
- **`exclude_paths`** (list[str], optional): Filter out specific path patterns from traceback

#### Default Filtering Behavior

When `exclude_paths` is not specified, the tools automatically apply **DEFAULT_EXCLUDE_PATHS** to filter out common system and dependency paths:

```python
DEFAULT_EXCLUDE_PATHS = [
    ".venv",           # Virtual environment packages
    "site-packages",   # Python package installations
    ".local",          # User-local Python installations
    "/builds/",        # CI/CD build directories
    "/root/.local",    # Root user local packages
    "/usr/lib/python", # System Python libraries
    "/opt/python",     # Optional Python installations
    "/__pycache__/",   # Python bytecode cache
    ".cache",          # Various cache directories
    "/tmp/",           # Temporary files
]
```
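
The filtering itself amounts to substring containment against each traceback frame's file path; roughly (a sketch reusing the list above, not the server's exact code):

```python
DEFAULT_EXCLUDE_PATHS = [
    ".venv", "site-packages", ".local", "/builds/", "/root/.local",
    "/usr/lib/python", "/opt/python", "/__pycache__/", ".cache", "/tmp/",
]

def filter_frames(frame_paths, exclude_paths=None):
    """Drop frames whose path contains any excluded pattern; None selects the defaults."""
    patterns = DEFAULT_EXCLUDE_PATHS if exclude_paths is None else exclude_paths
    return [p for p in frame_paths if not any(pat in p for pat in patterns)]
```

An empty `exclude_paths` list therefore disables filtering entirely, which matches the "complete traceback" example below.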

#### Usage Examples

```python
# Use default filtering (recommended for most cases)
await client.call_tool("get_file_errors", {
    "project_id": "123",
    "job_id": 76474190,
    "file_path": "src/my_module.py"
})

# Disable traceback completely for clean error summaries
await client.call_tool("get_file_errors", {
    "project_id": "123",
    "job_id": 76474190,
    "file_path": "src/my_module.py",
    "include_traceback": False
})

# Custom path filtering
await client.call_tool("get_file_errors", {
    "project_id": "123",
    "job_id": 76474190,
    "file_path": "src/my_module.py",
    "exclude_paths": [".venv", "site-packages", "/builds/"]
})

# Get complete traceback (no filtering)
await client.call_tool("get_file_errors", {
    "project_id": "123",
    "job_id": 76474190,
    "file_path": "src/my_module.py",
    "exclude_paths": []  # Empty list = no filtering
})
```

#### Benefits

- **Reduced Response Size**: Filter out irrelevant system paths to focus on application code
- **Faster Analysis**: Smaller responses mean faster processing and analysis
- **Cleaner Debugging**: Focus on your code without noise from dependencies and system libraries
- **Flexible Control**: Choose between default filtering, custom patterns, or complete traceback

## Usage Examples

### Version 0.8.0 New Features

#### 🚀 Merge Request Pipeline Analysis with Code Review Integration

```python
import asyncio
from fastmcp import Client

async def analyze_mr_pipeline_with_reviews():
    """Analyze a merge request pipeline with v0.8.0 features: MR context and Jira tickets"""
    client = Client("gitlab_analyzer.py")
    async with client:
        # Analyze failed MR pipeline - now includes MR context and Jira tickets
        result = await client.call_tool("failed_pipeline_analysis", {
            "project_id": "83",
            "pipeline_id": 1594344
        })

        # Check if this was a merge request pipeline
        if result.get("pipeline_type") == "merge_request":
            print("🔀 Merge Request Pipeline:")
            print(f"   Title: {result['merge_request']['title']}")
            print(f"   Source → Target: {result['source_branch']} → {result['target_branch']}")

            # Show Jira tickets extracted from MR - NEW in v0.8.0!
            jira_tickets = result.get("jira_tickets", [])
            if jira_tickets:
                print(f"🎫 Jira Tickets: {', '.join(jira_tickets)}")
        else:
            print("🌿 Branch Pipeline:")
            print(f"   Branch: {result['source_branch']}")
            print("   (No MR data included for branch pipelines)")

        print(f"📊 Status: {result.get('status')}")

asyncio.run(analyze_mr_pipeline_with_reviews())
```

#### ๐Ÿ” Code Review Context for Intelligent Fixes

The enhanced pipeline analysis now provides crucial code review context that can be used by AI agents to understand:

- **Review Feedback**: What issues reviewers identified before the pipeline failed
- **Unresolved Discussions**: Outstanding concerns that may be related to the failure
- **Approval Status**: Whether the code has reviewer approval despite CI failures
- **Code Quality Concerns**: Specific feedback about code structure, performance, or maintainability

This context enables more intelligent automated fixes by understanding both the technical failure and the human review feedback.

### Version 0.8.2 New Features

#### ๐Ÿ“ Code Review Integration

Building on the v0.8.0 MR context, v0.8.2 adds comprehensive code review integration to provide AI agents with human review feedback alongside technical failure data.

#### 📊 Example v0.8.0 Pipeline Resource

```json
{
  "pipeline_type": "merge_request",
  "merge_request": {
    "iid": 123,
    "title": "[PROJ-456] Fix user authentication bug",
    "description": "Resolves PROJ-456 by updating token validation",
    "source_branch": "feature/fix-auth",
    "target_branch": "main"
  },
  "jira_tickets": ["PROJ-456"]
  // ... other pipeline data
}
```

#### 🎯 Smart MR Data Filtering

The analyzer now intelligently filters data based on pipeline type:

```python
# For Merge Request pipelines (refs/merge-requests/123/head)
{
    "pipeline_type": "merge_request",
    "merge_request": {
        "iid": 123,
        "title": "[PROJ-456] Fix user authentication bug",
        "description": "Resolves PROJ-456 by updating token validation",
        "source_branch": "feature/fix-auth",
        "target_branch": "main"
    },
    "jira_tickets": ["PROJ-456"],
    # ... other pipeline data
}

# For Branch pipelines (refs/heads/main)
{
    "pipeline_type": "branch",
    "source_branch": "main",
    # No merge_request or jira_tickets fields included
    # ... other pipeline data
}
```

#### ๐Ÿ” Jira Ticket Extraction

```python
from gitlab_analyzer.utils.jira_utils import extract_jira_tickets

# Supports multiple formats
text = """
[PROJ-123] Fix authentication bug
Resolves MMGPP-456 and #TEAM-789
Also fixes (CORE-101) issue
"""

tickets = extract_jira_tickets(text)
# Returns: ["PROJ-123", "MMGPP-456", "TEAM-789", "CORE-101"]
```
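
One plausible implementation is a single ticket-shaped regular expression plus order-preserving deduplication. This is a sketch of the idea, not necessarily how `extract_jira_tickets` is written:

```python
import re

def extract_tickets(text: str) -> list[str]:
    """Find ABC-123-style ticket keys, deduplicated in first-seen order."""
    matches = re.findall(r"\b[A-Z][A-Z0-9]+-\d+\b", text)
    return list(dict.fromkeys(matches))  # dict keys preserve insertion order
```

The word boundaries let the pattern match tickets wrapped in brackets, parentheses, or `#` prefixes, as in the example above.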

#### ๐Ÿ“ Code Review Integration (v0.8.2+)

GitLab MCP Analyzer now automatically includes human review feedback for Merge Request pipelines, providing AI agents with crucial context about code quality concerns:

```python
# Analysis of MR pipeline automatically includes review data
result = await client.call_tool("failed_pipeline_analysis", {
    "project_id": "83",
    "pipeline_id": 1594344
})

# Review data is included in pipeline analysis
review_data = result.get("review_summary", {})
print(f"Approval Status: {review_data.get('approval_status', 'unknown')}")
print(f"Unresolved Discussions: {review_data.get('unresolved_discussions_count', 0)}")
print(f"Review Comments: {review_data.get('review_comments_count', 0)}")

# Access detailed feedback
for discussion in review_data.get("discussions", []):
    if not discussion.get("resolved", True):
        print(f"๐Ÿ” Unresolved: {discussion.get('body', 'No content')}")
        for note in discussion.get("notes", []):
            if note.get("type") == "suggestion":
                print(f"💡 Suggestion: {note.get('body', 'No content')}")
```

**Review Integration Features:**

- **Approval Status**: Tracks MR approval state (approved, unapproved, requires_approval)
- **Discussion Context**: Captures all MR discussions including unresolved items
- **Code Suggestions**: Includes inline code suggestions from reviewers
- **Review Notes**: Aggregates all review comments and feedback
- **Quality Concerns**: Highlights code quality issues raised by humans

This enables AI agents to understand not just what failed technically, but also what human reviewers have identified as concerns, leading to more contextually appropriate automated fixes.

## Example

```python
import asyncio
from fastmcp import Client

async def analyze_pipeline():
    """Example: Analyze a failed pipeline with v0.8.2 features including code review"""
    client = Client("gitlab_analyzer.py")
    async with client:
        # Try to get existing pipeline data first (recommended v0.8.0+ workflow)
        try:
            result = await client.call_tool("get_mcp_resource", {
                "resource_uri": "gl://pipeline/83/1594344"
            })
            print("✅ Found cached pipeline data")
        except Exception:
            # If not analyzed yet, run full analysis
            result = await client.call_tool("failed_pipeline_analysis", {
                "project_id": "83",
                "pipeline_id": 1594344
            })
            print("🔄 Performed new pipeline analysis")

        # Check pipeline type and show appropriate information
        if result.get("pipeline_type") == "merge_request":
            mr_info = result.get("merge_request", {})
            print(f"🔀 MR: {mr_info.get('title', 'Unknown')}")

            # Show Jira ticket context
            jira_tickets = result.get("jira_tickets", [])
            if jira_tickets:
                print(f"🎫 Jira: {', '.join(jira_tickets)}")

            # Show code review context (v0.8.2+)
            review_data = result.get("review_summary", {})
            if review_data:
                approval = review_data.get("approval_status", "unknown")
                unresolved = review_data.get("unresolved_discussions_count", 0)
                comments = review_data.get("review_comments_count", 0)

                print(f"๐Ÿ“ Review Status: {approval}")
                if unresolved > 0:
                    print(f"๐Ÿ” Unresolved Issues: {unresolved}")
                if comments > 0:
                    print(f"๐Ÿ’ฌ Review Comments: {comments}")

                # Show human feedback for AI context
                discussions = review_data.get("discussions", [])
                unresolved_feedback = [d for d in discussions if not d.get("resolved", True)]
                if unresolved_feedback:
                    print("\n๐Ÿšจ Human Review Concerns:")
                    for discussion in unresolved_feedback[:3]:  # Show first 3
                        body = discussion.get("body", "")[:100]
                        print(f"   โ€ข {body}...")
        else:
            print(f"๐ŸŒฟ Branch: {result.get('source_branch', 'Unknown')}")

        print(f"๐Ÿ“Š Status: {result.get('status')}")

asyncio.run(analyze_pipeline())
```

## Environment Setup

Create a `.env` file with your GitLab configuration:

```env
GITLAB_URL=https://gitlab.com
GITLAB_TOKEN=your-personal-access-token
```
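Before starting the server, a quick pre-flight check can confirm both required values are present in the environment. This snippet is illustrative and not part of the package:

```python
def check_gitlab_env(env: dict) -> list[str]:
    """Return the names of required GitLab settings that are missing or empty."""
    required = ["GITLAB_URL", "GITLAB_TOKEN"]
    return [name for name in required if not env.get(name)]

# A complete configuration reports nothing missing
ok_env = {"GITLAB_URL": "https://gitlab.com", "GITLAB_TOKEN": "your-token"}
print(check_gitlab_env(ok_env))                          # → []
print(check_gitlab_env({"GITLAB_URL": "https://gitlab.com"}))  # → ['GITLAB_TOKEN']
```

In practice you would pass `os.environ` (or the result of loading `.env`) and abort startup if the returned list is non-empty.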

## Development

```bash
# Install development dependencies
uv sync

# Run tests
uv run pytest

# Run linting and type checking
uv run tox -e lint,type

# Run all quality checks
uv run tox
```

## License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## Author

**Siarhei Skuratovich**

## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Run the test suite
5. Submit a pull request

For maintainers preparing releases, see [DEPLOYMENT.md](DEPLOYMENT.md) for detailed deployment preparation steps.

---

**Note**: This MCP server is designed to work with GitLab CI/CD pipelines and requires appropriate API access tokens.

information\n        if result.get(\"pipeline_type\") == \"merge_request\":\n            mr_info = result.get(\"merge_request\", {})\n            print(f\"\ud83d\udd00 MR: {mr_info.get('title', 'Unknown')}\")\n\n            # Show Jira ticket context\n            jira_tickets = result.get(\"jira_tickets\", [])\n            if jira_tickets:\n                print(f\"\ud83c\udfab Jira: {', '.join(jira_tickets)}\")\n\n            # Show code review context (v0.8.2+)\n            review_data = result.get(\"review_summary\", {})\n            if review_data:\n                approval = review_data.get(\"approval_status\", \"unknown\")\n                unresolved = review_data.get(\"unresolved_discussions_count\", 0)\n                comments = review_data.get(\"review_comments_count\", 0)\n\n                print(f\"\ud83d\udcdd Review Status: {approval}\")\n                if unresolved > 0:\n                    print(f\"\ud83d\udd0d Unresolved Issues: {unresolved}\")\n                if comments > 0:\n                    print(f\"\ud83d\udcac Review Comments: {comments}\")\n\n                # Show human feedback for AI context\n                discussions = review_data.get(\"discussions\", [])\n                unresolved_feedback = [d for d in discussions if not d.get(\"resolved\", True)]\n                if unresolved_feedback:\n                    print(\"\\n\ud83d\udea8 Human Review Concerns:\")\n                    for discussion in unresolved_feedback[:3]:  # Show first 3\n                        body = discussion.get(\"body\", \"\")[:100]\n                        print(f\"   \u2022 {body}...\")\n        else:\n            print(f\"\ud83c\udf3f Branch: {result.get('source_branch', 'Unknown')}\")\n\n        print(f\"\ud83d\udcca Status: {result.get('status')}\")\n\nasyncio.run(analyze_pipeline())\n```\n\n## Environment Setup\n\nCreate a `.env` file with your GitLab 
configuration:\n\n```env\nGITLAB_URL=https://gitlab.com\nGITLAB_TOKEN=your-personal-access-token\n```\n\n## Development\n\n```bash\n# Install development dependencies\nuv sync\n\n# Run tests\nuv run pytest\n\n# Run linting and type checking\nuv run tox -e lint,type\n\n# Run all quality checks\nuv run tox\n```\n\n## License\n\nThis project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.\n\n## Author\n\n**Siarhei Skuratovich**\n\n## Contributing\n\n1. Fork the repository\n2. Create a feature branch\n3. Make your changes\n4. Run the test suite\n5. Submit a pull request\n\nFor maintainers preparing releases, see [DEPLOYMENT.md](DEPLOYMENT.md) for detailed deployment preparation steps.\n\n---\n\n**Note**: This MCP server is designed to work with GitLab CI/CD pipelines and requires appropriate API access tokens.\n",
    "bugtrack_url": null,
    "license": null,
    "summary": "FastMCP server for analyzing GitLab CI/CD pipeline failures",
    "version": "0.10.0",
    "project_urls": null,
    "split_keywords": [
        "gitlab",
        " ci/cd",
        " pipeline",
        " analysis",
        " mcp",
        " fastmcp"
    ],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "87da3072e98619c1a5c3b8e3109246368402f4f5b16115409d4be3a6246a6b36",
                "md5": "2e45178d38d1cecb57eae541eaa4e291",
                "sha256": "66aee241881b43caad9f7e0bf7aafc8f4c635f87ba593af6ce0b7fbe6c96f04a"
            },
            "downloads": -1,
            "filename": "gitlab_pipeline_analyzer-0.10.0-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "2e45178d38d1cecb57eae541eaa4e291",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.10",
            "size": 172387,
            "upload_time": "2025-09-08T15:31:43",
            "upload_time_iso_8601": "2025-09-08T15:31:43.571006Z",
            "url": "https://files.pythonhosted.org/packages/87/da/3072e98619c1a5c3b8e3109246368402f4f5b16115409d4be3a6246a6b36/gitlab_pipeline_analyzer-0.10.0-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "70dfa029f6fef13517d4156a2dbd0a8848520f0187f8717e5c75d9979e11acd9",
                "md5": "e5ef341ab09a472c5bf54a48648f1bb7",
                "sha256": "a6bc55735283c8efaade43d6f91404d6f7c6fb45668b931a26f31afd3d0b7348"
            },
            "downloads": -1,
            "filename": "gitlab_pipeline_analyzer-0.10.0.tar.gz",
            "has_sig": false,
            "md5_digest": "e5ef341ab09a472c5bf54a48648f1bb7",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.10",
            "size": 253427,
            "upload_time": "2025-09-08T15:31:45",
            "upload_time_iso_8601": "2025-09-08T15:31:45.151870Z",
            "url": "https://files.pythonhosted.org/packages/70/df/a029f6fef13517d4156a2dbd0a8848520f0187f8717e5c75d9979e11acd9/gitlab_pipeline_analyzer-0.10.0.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-09-08 15:31:45",
    "github": false,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "lcname": "gitlab-pipeline-analyzer"
}
        