consult7

Name: consult7
Version: 2.1.0
Summary: MCP server for consulting large context window models to analyze extensive file collections
Author: Stefan Szeider <stefan@szeider.net>
Project: https://github.com/szeider/consult7
Requires Python: >=3.11
License: MIT
Keywords: code-analysis, large-context, llm, mcp
Uploaded: 2025-09-01 21:19:12
# Consult7 MCP Server

**Consult7** is a Model Context Protocol (MCP) server that enables AI agents to consult large context window models for analyzing extensive file collections - entire codebases, document repositories, or mixed content - that exceed the current agent's context limits. Supported providers: *OpenRouter*, *OpenAI*, and *Google*.

## Why Consult7?

When working with AI agents that have limited context windows (like Claude with 200K tokens), **Consult7** allows them to leverage models with massive context windows to analyze large codebases or document collections that would otherwise be impossible to process in a single query.

> "For Claude Code users, Consult7 is a game changer."

## How it works

**Consult7** collects files from the specific paths you provide (with optional wildcards in filenames), assembles them into a single context, and sends them to a large context window model along with your query. The result is directly fed back to the agent you are working with.
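
Conceptually, the assembly step works like the following minimal Python sketch (illustrative only, not Consult7's actual implementation; the real server also applies the ignore list and size limits described below):

```python
# Conceptual sketch: expand each pattern, read the matching files, and
# build one prompt that pairs the file contents with the query.
from pathlib import Path

def assemble_context(patterns: list[str], query: str) -> str:
    parts = []
    for pattern in patterns:
        directory, filename = pattern.rsplit("/", 1)
        for path in sorted(Path(directory).glob(filename)):
            parts.append(f"--- {path} ---\n{path.read_text(errors='replace')}")
    return "\n\n".join(parts) + f"\n\nQuery: {query}"
```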

## Example Use Cases

### Summarize an entire codebase

* **Files:** `["/Users/john/project/src/*.py", "/Users/john/project/lib/*.py"]`
* **Query:** "Summarize the architecture and main components of this Python project"
* **Model:** `"gemini-2.5-flash"`

### Find specific method definitions
* **Files:** `["/Users/john/backend/src/*.py", "/Users/john/backend/auth/*.js"]`
* **Query:** "Find the implementation of the authenticate_user method and explain how it handles password verification"
* **Model:** `"gemini-2.5-pro"`

### Analyze test coverage
* **Files:** `["/Users/john/project/tests/*_test.py", "/Users/john/project/src/*.py"]`
* **Query:** "List all the test files and identify which components lack test coverage"
* **Model:** `"gemini-2.5-flash"`

### Complex analysis with thinking mode
* **Files:** `["/Users/john/webapp/src/*.py", "/Users/john/webapp/auth/*.py", "/Users/john/webapp/api/*.js"]`
* **Query:** "Analyze the authentication flow across this codebase. Think step by step about security vulnerabilities and suggest improvements"
* **Model:** `"gemini-2.5-flash|thinking"`

### Generate a report saved to file
* **Files:** `["/Users/john/project/src/*.py", "/Users/john/project/tests/*.py"]`
* **Query:** "Generate a comprehensive code review report with architecture analysis, code quality assessment, and improvement recommendations"
* **Model:** `"gemini-2.5-pro"`
* **Output File:** `"/Users/john/reports/code_review.md"`
* **Result:** Returns `"Result has been saved to /Users/john/reports/code_review.md"` instead of flooding the agent's context

## Installation

### Claude Code

Simply run:

```bash
# OpenRouter
claude mcp add -s user consult7 uvx -- consult7 openrouter your-api-key

# Google AI
claude mcp add -s user consult7 uvx -- consult7 google your-api-key

# OpenAI
claude mcp add -s user consult7 uvx -- consult7 openai your-api-key
```

### Claude Desktop

Add to your Claude Desktop configuration file:

```json
{
  "mcpServers": {
    "consult7": {
      "type": "stdio",
      "command": "uvx",
      "args": ["consult7", "openrouter", "your-api-key"]
    }
  }
}
```

Replace `openrouter` with your provider choice (`google` or `openai`) and `your-api-key` with your actual API key.

No installation required - `uvx` automatically downloads and runs consult7 in an isolated environment.


## Command Line Options

```bash
uvx consult7 <provider> <api-key> [--test]
```

- `<provider>`: Required. Choose from `openrouter`, `google`, or `openai`
- `<api-key>`: Required. Your API key for the chosen provider
- `--test`: Optional. Test the API connection

The model is specified when calling the tool, not at startup. The server shows example models for your provider on startup.

### Model Examples

#### Google
Standard models:
- `"gemini-2.5-flash"` - Fast model
- `"gemini-2.5-flash-lite"` - Ultra fast lite model
- `"gemini-2.5-pro"` - Intelligent model
- `"gemini-2.0-flash-exp"` - Experimental model

With thinking mode (add `|thinking` suffix):
- `"gemini-2.5-flash|thinking"` - Fast with deep reasoning
- `"gemini-2.5-flash-lite|thinking"` - Ultra fast with deep reasoning
- `"gemini-2.5-pro|thinking"` - Intelligent with deep reasoning

#### OpenRouter
Standard models:
- `"google/gemini-2.5-pro"` - Intelligent, 1M context
- `"google/gemini-2.5-flash"` - Fast, 1M context
- `"google/gemini-2.5-flash-lite"` - Ultra fast, 1M context
- `"anthropic/claude-sonnet-4"` - Claude Sonnet, 200k context
- `"anthropic/claude-opus-4.1"` - Claude Opus 4.1, 200k context
- `"openai/gpt-5"` - GPT-5, 400k context
- `"openai/gpt-4.1"` - GPT-4.1, 1M+ context

With reasoning mode (add `|thinking` suffix):
- `"anthropic/claude-sonnet-4|thinking"` - Claude with 31,999 reasoning tokens
- `"anthropic/claude-opus-4.1|thinking"` - Opus 4.1 with reasoning
- `"google/gemini-2.5-flash-lite|thinking"` - Ultra fast with reasoning
- `"openai/gpt-5|thinking"` - GPT-5 with reasoning
- `"openai/gpt-4.1|thinking"` - GPT-4.1 with reasoning effort=high

#### OpenAI
Standard models (the context length must be included in the model string):
- `"gpt-5|400k"` - GPT-5, 400k context
- `"gpt-5-mini|400k"` - GPT-5 Mini, faster
- `"gpt-5-nano|400k"` - GPT-5 Nano, ultra fast
- `"gpt-4.1-2025-04-14|1047576"` - 1M+ context, very fast
- `"gpt-4.1-nano-2025-04-14|1047576"` - 1M+ context, ultra fast
- `"o3-2025-04-16|200k"` - Advanced reasoning model
- `"o4-mini-2025-04-16|200k"` - Fast reasoning model

O-series models with the `|thinking` marker:
- `"o1-mini|128k|thinking"` - Mini reasoning model
- `"o3-2025-04-16|200k|thinking"` - Advanced reasoning model

**Note:** For OpenAI, `|thinking` is only supported on o-series models and serves as an informational marker. The models use reasoning tokens automatically.

**Advanced:** You can specify a custom thinking token budget with `|thinking=30000`, but this is rarely needed.
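
To make the `model|context|thinking` syntax concrete, here is an illustrative parser (the function name and return shape are assumptions of this sketch; the server's own parsing may differ):

```python
# Illustrative parser for the documented model string syntax:
# "name", optional "|<context>", optional "|thinking" or "|thinking=N".
def parse_model_spec(spec: str) -> dict:
    name, *options = spec.split("|")
    result = {"model": name, "context": None,
              "thinking": False, "thinking_tokens": None}
    for opt in options:
        if opt == "thinking":
            result["thinking"] = True        # use the provider's default budget
        elif opt.startswith("thinking="):
            result["thinking"] = True
            result["thinking_tokens"] = int(opt.split("=", 1)[1])  # e.g. 30000
        else:
            result["context"] = opt          # e.g. "400k" or "1047576"
    return result

assert parse_model_spec("gpt-5|400k|thinking")["context"] == "400k"
assert parse_model_spec("gemini-2.5-flash|thinking=30000")["thinking_tokens"] == 30000
```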

## File Specification Rules

When using the consultation tool, you provide a list of file paths that must follow these rules (see the validation sketch below):

1. **All paths must be absolute** (start with `/`)
   - ✅ Good: `/Users/john/project/src/*.py`
   - ❌ Bad: `src/*.py` or `./src/*.py`

2. **Wildcards (`*`) only allowed in filenames**, not in directory paths
   - ✅ Good: `/Users/john/project/*.py`
   - ❌ Bad: `/Users/*/project/*.py` or `/Users/john/**/*.py`

3. **Must specify extension when using wildcards**
   - ✅ Good: `/Users/john/project/*.py`
   - ❌ Bad: `/Users/john/project/*`

4. **Mix specific files and patterns freely**
   - ✅ Good: `["/path/src/*.py", "/path/README.md", "/path/tests/*_test.py"]`

5. **Common patterns:**
   - All Python files in a directory: `/path/to/dir/*.py`
   - Test files: `/path/to/tests/*_test.py` or `/path/to/tests/test_*.py`
   - Multiple extensions: Use multiple patterns like `["/path/*.js", "/path/*.ts"]`

The tool automatically ignores: `__pycache__`, `.env`, `secrets.py`, `.DS_Store`, `.git`, `node_modules`

**Size limits:** 1MB per file, 4MB total (optimized for ~1M token context windows)
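
The rules above can be checked mechanically. A hedged sketch (the function name and error messages are illustrative, not Consult7's internal API):

```python
# Checks the documented rules: absolute path, "*" only in the filename,
# and an extension whenever a wildcard is used.
def validate_pattern(pattern: str) -> None:
    if not pattern.startswith("/"):
        raise ValueError(f"path must be absolute: {pattern}")
    directory, _, filename = pattern.rpartition("/")
    if "*" in directory:
        raise ValueError(f"wildcards are only allowed in filenames: {pattern}")
    if "*" in filename and "." not in filename.rsplit("*", 1)[-1]:
        raise ValueError(f"wildcard patterns need an extension: {pattern}")

validate_pattern("/Users/john/project/*.py")     # ok
validate_pattern("/Users/john/tests/*_test.py")  # ok
```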

## Tool Parameters

The consultation tool accepts the following parameters:

- **files** (required): List of absolute file paths or patterns with wildcards in filenames only
- **query** (required): Your question or instruction for the LLM to process the files
- **model** (required): The LLM model to use (see Model Examples above for each provider)
- **output_file** (optional): Absolute path to save the response to a file instead of returning it
  - If the file exists, the result is saved with an `_updated` suffix instead (e.g., `report.md` → `report_updated.md`)
  - When specified, returns only: `"Result has been saved to /path/to/file"`
  - Useful for generating reports, documentation, or analyses without flooding the agent's context
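
As an illustration, a call to the consultation tool might carry arguments shaped like this (a hypothetical payload mirroring the parameters above; the tool name and transport are handled by your MCP client):

```python
# Example argument payload for the consultation tool (illustrative values).
arguments = {
    "files": ["/Users/john/project/src/*.py", "/Users/john/project/README.md"],
    "query": "Summarize the architecture and flag any dead code",
    "model": "gemini-2.5-flash|thinking",
    "output_file": "/Users/john/reports/summary.md",  # optional
}
```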

## Testing

```bash
# Test OpenRouter
uvx consult7 openrouter sk-or-v1-... --test

# Test Google AI
uvx consult7 google AIza... --test

# Test OpenAI
uvx consult7 openai sk-proj-... --test
```

## Uninstalling

To remove consult7 from Claude Code (or before reinstalling):

```bash
claude mcp remove consult7 -s user
```


            
