knowledge-mcp

Name: knowledge-mcp
Version: 0.3.0
Summary: A MCP server designed to bridge the gap between specialized knowledge domains and AI assistants.
Author: Olaf Geibig <olaf.geibig@gmail.com>
Homepage: https://github.com/olafgeibig/knowledge-mcp
Requires Python: >=3.12, <3.13
License: MIT
Keywords: cli, fastmcp, knowledge, lightrag, mcp, search
Upload time: 2025-07-12 14:42:36
# knowledge-mcp: Specialized Knowledge Bases for AI Agents

## 1. Overview and Concept

**knowledge-mcp** is an MCP server designed to bridge the gap between specialized knowledge domains and AI assistants. It allows users to create, manage, and query dedicated knowledge bases, making their content accessible to AI agents through an MCP (Model Context Protocol) server interface.

The core idea is to empower AI assistants that are MCP clients (like Claude Desktop or IDEs like Windsurf) to proactively consult these specialized knowledge bases during their reasoning process (Chain of Thought), rather than relying solely on general semantic search against user prompts or broad web searches. This enables more accurate, context-aware responses when dealing with specific domains.

Key components:

*   **CLI Tool:** Provides a user-friendly command-line interface for managing knowledge bases (creating, deleting, adding/removing documents, configuring, searching).
*   **Knowledge Base Engine:** Leverages **LightRAG** to handle document processing, embedding, knowledge graph creation, and complex querying.
*   **MCP Server:** Exposes the search functionality of the knowledge bases via the FastMCP protocol, allowing compatible AI agents to query them directly.
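
The MCP server itself is started with the `mcp` subcommand; this is the same invocation that the MCP client configuration shown in section 3 runs on your behalf. A minimal sketch, assuming a `config.yaml` in the current directory:

```bash
# Start the MCP server so a configured MCP client (e.g., Claude Desktop) can connect to it
uvx knowledge-mcp --config config.yaml mcp
```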

## 2. About LightRAG

This project utilizes LightRAG ([HKUDS/LightRAG](https://github.com/HKUDS/LightRAG)) as its core engine for knowledge base creation and querying. LightRAG is a powerful framework designed to enhance Large Language Models (LLMs) by integrating Retrieval-Augmented Generation (RAG) with knowledge graph techniques.

Key features of LightRAG relevant to this project:

*   **Document Processing Pipeline:** Ingests documents (PDF, Text, Markdown, DOCX), chunks them, extracts entities and relationships using an LLM, and builds both a knowledge graph and vector embeddings.
*   **Multiple Query Modes:** Supports various retrieval strategies (e.g., vector similarity, entity-centric, relationship-focused, hybrid) to find the most relevant context for a given query.
*   **Flexible Storage:** Can use different backends for storing key-value data, vectors, graph information, and document status (this project uses the default file-based storage).
*   **LLM/Embedding Integration:** Supports various providers like OpenAI (used in this project), Ollama, Hugging Face, etc.

By using LightRAG, `knowledge-mcp` benefits from advanced RAG capabilities that go beyond simple vector search.
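
In `knowledge-mcp`, the LightRAG query mode is chosen per knowledge base via that KB's `config.yaml` (see section 4) and exercised through the CLI. A minimal sketch, assuming a knowledge base named `my_kb` already exists:

```bash
# Show the KB-specific config, including the LightRAG query mode (default: hybrid) and top_k
knowledge-mcp --config config.yaml config my_kb show

# Query the KB; LightRAG retrieves context using the configured mode
knowledge-mcp --config config.yaml query my_kb "How are the main entities related?"
```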

## 3. Installation

Ensure you have Python 3.12 and `uv` installed.

1.  **Running the Tool:** After installing the package (e.g., using `uv pip install -e .`), you can run the CLI using `uvx`:
    ```bash
    # General command structure
    uvx knowledge-mcp --config <path-to-your-config.yaml> <command> [arguments...]

    # Example: Start interactive shell
    uvx knowledge-mcp --config <path-to-your-config.yaml> shell
    ```

2.  **Configure MCP Client:** To allow an MCP client (like Claude Desktop or Windsurf) to connect to this server, configure the client with the following settings. Replace the config path with the absolute path to your main `config.yaml`.
    ```json
    {
      "mcpServers": {
        "knowledge-mcp": {
          "command": "uvx",
          "args": [
            "knowledge-mcp",
            "--config",
            "<absolute-path-to-your-config.yaml>",
            "mcp"
          ]
        }
      }
    }
    ```

3.  **Set up configuration:**
    *   Copy `config.example.yaml` to `config.yaml`.
    *   Copy `.env.example` to `.env`.
    *   Edit `config.yaml` and `.env` to add your API keys (e.g., `OPENAI_API_KEY`) and adjust paths or settings as needed. The `knowledge_base.base_dir` in `config.yaml` specifies where your knowledge base directories will be created.
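
    For example, a minimal sketch of these setup steps (run from the project root; the API key value is a placeholder):
    ```bash
    cp config.example.yaml config.yaml
    cp .env.example .env

    # Edit .env so it contains your key; config.yaml references it via ${OPENAI_API_KEY}:
    #   OPENAI_API_KEY=<your-openai-api-key>
    ```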

## 4. Configuration

Configuration is managed via YAML files:

1.  **Main Configuration (`config.yaml`):** Defines global settings like the knowledge base directory (`knowledge_base.base_dir`), LightRAG parameters (LLM provider/model, embedding provider/model, API keys via `${ENV_VAR}` substitution), and logging settings. Refer to `config.example.yaml` for the full structure and available options.

    ```yaml
    knowledge_base:
      base_dir: ./kbs

    lightrag:
      llm:
        provider: "openai"
        model_name: "gpt-4.1-nano"
        api_key: "${OPENAI_API_KEY}"
        # ... other LLM settings
      embedding:
        provider: "openai"
        model_name: "text-embedding-3-small"
        api_key: "${OPENAI_API_KEY}"
        # ... other embedding settings
      embedding_cache:
        enabled: true
        similarity_threshold: 0.90

    logging:
      level: "INFO"
      # ... logging settings

    env_file: .env # path to .env file
    ```

2.  **Knowledge Base Specific Configuration (`<base_dir>/<kb_name>/config.yaml`):** Contains parameters specific to querying *that* knowledge base, such as the LightRAG query `mode` (default: "hybrid"), `top_k` results (default: 40), context token limits, `text_only` parsing mode, and `user_prompt` for response formatting. This file is automatically created with defaults when a KB is created and can be viewed/edited using the `config` CLI command.

3.  **Knowledge Base Directory Structure:** When you create knowledge bases, they are stored within the directory specified by `knowledge_base.base_dir` in your main `config.yaml`. The structure typically looks like this:

    ```
    <base_dir>/              # Main directory, contains a set of knowledge bases
    ├── config.yaml          # Main application configuration (copied from config.example.yaml)
    ├── .env                 # Environment variables referenced in config.yaml
    ├── kbmcp.log
    ├── knowledge_base_1/    # Directory for the first KB
    │   ├── config.yaml      # KB-specific configuration (query parameters)
    │   ├── <storage_files>  # The LightRAG storage files
    └── knowledge_base_2/    # Directory for the second KB
        ├── config.yaml
        ├── <storage_files>
    ```

## 5. New Features

### 5.1 Text-Only Document Parsing

By default, knowledge-mcp processes documents using both text content and metadata (like document structure, formatting, etc.). You can now configure knowledge bases to use **text-only parsing** for faster processing and reduced token usage.

**Benefits:**
- Faster document processing
- Lower LLM token consumption
- Simplified content extraction
- Better performance with large document collections

**Configuration:**
Add `text_only: true` to your knowledge base's `config.yaml`:

```yaml
# In <base_dir>/<kb_name>/config.yaml
description: "My knowledge base with text-only parsing"
mode: "hybrid"
top_k: 40
text_only: true  # Enable text-only parsing
```

**Usage:**
```bash
# Create a new KB and configure it for text-only parsing
knowledge-mcp --config config.yaml create my_text_kb
knowledge-mcp --config config.yaml config my_text_kb edit
# Add text_only: true to the config file

# Add documents - they will be processed with text-only parsing
knowledge-mcp --config config.yaml add my_text_kb ./documents/
```

### 5.2 Configurable User Prompts

You can now customize how the LLM formats and structures its responses for each knowledge base by configuring a `user_prompt`. This allows you to tailor the response style to match your specific use case.

**Benefits:**
- Consistent response formatting across queries
- Domain-specific response styles
- Better integration with downstream applications
- Improved user experience

**Configuration:**
Add a `user_prompt` field to your knowledge base's `config.yaml`. The prompt supports multi-line YAML syntax:

```yaml
# In <base_dir>/<kb_name>/config.yaml
description: "Technical documentation KB"
mode: "hybrid"
top_k: 40
user_prompt: |
  Please format your response as follows:
  
  ## Summary
  Provide a brief 2-3 sentence summary of the key points.
  
  ## Detailed Answer
  Give a comprehensive explanation with specific details.
  
  ## Key Takeaways
  - List 3-5 bullet points with the most important insights
  - Focus on actionable information
  
  Keep your response clear, concise, and well-organized.
```

**Example Configurations:**

1. **Business-Focused Format:**
```yaml
user_prompt: |
  Structure your response for business stakeholders:
  
  **Executive Summary** (2-3 sentences)
  Brief overview of the main points and business impact.
  
  **Key Findings**
  • Most critical insights
  • Relevant metrics or data points
  • Risk factors or opportunities
  
  **Recommendations**
  • Specific actionable steps
  • Priority levels (High/Medium/Low)
  • Expected outcomes
```

2. **Technical Documentation Style:**
```yaml
user_prompt: |
  You are a technical documentation expert. Please structure your response with:
  
  1. **Context**: Brief background on the topic
  2. **Implementation**: Step-by-step technical details
  3. **Best Practices**: Recommended approaches and common pitfalls
  4. **Examples**: Concrete code examples or use cases where applicable
  
  Use clear headings, bullet points, and code blocks for readability.
```

3. **Academic Research Style:**
```yaml
user_prompt: |
  Please provide a scholarly response that includes:
  
  • **Introduction**: Context and scope of the topic
  • **Analysis**: Critical examination of key concepts and evidence
  • **Synthesis**: How different pieces of information connect
  • **Conclusion**: Main findings and implications
  
  Support your points with specific references from the knowledge base.
```

**Usage:**
```bash
# Configure user prompt for an existing KB
knowledge-mcp --config config.yaml config my_kb edit
# Add your user_prompt configuration to the YAML file

# Query the KB - responses will follow your configured format
knowledge-mcp --config config.yaml query my_kb "What are the main concepts?"
```

**Notes:**
- User prompts are applied automatically to all queries for that knowledge base
- Leave `user_prompt` empty or omit it to use default LLM behavior
- Changes take effect immediately - no need to rebuild the knowledge base
- Backward compatible - existing knowledge bases continue to work without modification

## 6. Usage (CLI)

The primary way to interact with `knowledge-mcp` is through its CLI, accessed via the `knowledge-mcp` command (if installed globally or via `uvx knowledge-mcp` within the activated venv).

**All commands require the `--config` option pointing to your main configuration file.**

```bash
uv run knowledge-mcp --config /path/to/config.yaml shell
```
**Available Commands (Interactive Shell):**

| Command  | Description                                                                 | Arguments                                                                      |
| :------- | :-------------------------------------------------------------------------- | :----------------------------------------------------------------------------- |
| `create` | Creates a new knowledge base directory and initializes its structure.       | `<name>`: Name of the KB.<br> `["description"]`: Optional description.         |
| `delete` | Deletes an existing knowledge base directory and all its contents.            | `<name>`: Name of the KB to delete.                                          |
| `list`   | Lists all available knowledge bases and their descriptions.                 | N/A                                                                            |
| `add`    | Adds a document: processes, chunks, embeds, and stores it in the specified KB. | `<kb_name>`: Target KB.<br>`<file_path>`: Path to the document file.          |
| `remove` | Removes a document and its associated data from the KB by its ID.           | `<kb_name>`: Target KB.<br>`<doc_id>`: ID of the document to remove.         |
| `config` | Manages the KB-specific `config.yaml`. Shows content or opens it in an editor. | `<kb_name>`: Target KB.<br>`[show\|edit]`: Subcommand (default: `show`).      |
| `query`  | Searches the specified knowledge base using LightRAG.                     | `<kb_name>`: Target KB.<br>`<query_text>`: Your search query text.             |
| `clear`  | Clears the terminal screen.                                                 | N/A                                                                            |
| `exit`   | Exits the interactive shell.                                                | N/A                                                                            |
| `EOF`    | (Ctrl+D) Exits the interactive shell.                                       | N/A                                                                            |
| `help`   | Shows available commands and their usage within the shell.                  | `[command]` (Optional command name)                                            |

**Example (Direct CLI):**

```bash
# Create a knowledge base named 'my_docs'
knowledge-mcp --config config.yaml create my_docs

# Add a document to it
knowledge-mcp --config config.yaml add my_docs ./path/to/mydocument.pdf

# Search the knowledge base
knowledge-mcp --config config.yaml query my_docs "What is the main topic?"

# Start the interactive shell
knowledge-mcp --config config.yaml shell

(kbmcp) list
(kbmcp) query my_docs "Another query"
(kbmcp) exit
```

## 7. Development

1. **Project Decisions**
*   **Tech Stack:** Python 3.12, uv (dependency management), hatchling (build system), pytest (testing).
*   **Setup:** Follow the installation steps, ensuring you install with `uv pip install -e ".[dev]"`.
*   **Code Style:** Adheres to PEP 8.
*   **Testing:** Run tests using `uvx test` or `pytest`.
*   **Dependencies:** Managed in `pyproject.toml`. Use `uv pip install <package>` to add and `uv pip uninstall <package>` to remove dependencies, updating `pyproject.toml` accordingly.
*   **Scripts:** Common tasks might be defined under `[project.scripts]` in `pyproject.toml`.
*   **Release:** Build with `hatch build`, then upload with `twine upload dist/*`.
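
A typical development loop built from the commands above (a sketch; assumes you are in the project root with `uv` installed):

```bash
# Install the package in editable mode together with dev dependencies
uv pip install -e ".[dev]"

# Run the test suite
pytest

# Build the distribution and upload a release
hatch build
twine upload dist/*
```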

2. **Test with uvx:** add an entry like the following to your MCP client configuration (cf. section 3) to run the server from a local checkout via uvx's `--project` option:
```json
    "knowledge-mcp": {
      "command": "uvx",
      "args": [
        "--project",
        "/path/to/knowledge-mcp",
        "knowledge-mcp",
        "--config",
        "/path/to/knowledge-mcp/kbs/config.yaml",
        "mcp"
      ]
    }
```
3. **Test with MCP Inspector**
```bash
npx @modelcontextprotocol/inspector uv "run knowledge-mcp --config /path/to/config.yaml mcp"
```
or
```bash
npx @modelcontextprotocol/inspector uvx --project . knowledge-mcp "--config ./kbs/config.yaml mcp"
```
4. **Convenience dev scripts**

These assume a local config file at `./kbs/config.yaml`:
* `uvx shell` - Starts the interactive shell
* `uvx insp` - Starts the MCP Inspector
            
