project-quality-hub

- Name: project-quality-hub
- Version: 0.1.0
- Summary: Project Quality Hub: MCP-enabled project graph analysis, branch awareness, and quality scoring.
- Author: WangQiao
- License: MIT
- Requires-Python: >=3.10
- Keywords: model-context-protocol, mcp, code-quality, project-analysis, ai-assistant
- Homepage / Repository: https://github.com/WangQiao/project-quality-hub
- Issue Tracker: https://github.com/WangQiao/project-quality-hub/issues
- Upload time: 2025-11-01 08:13:22

# Project Quality Hub

[![CI](https://github.com/WangQiao/project-quality-hub/actions/workflows/ci.yml/badge.svg)](https://github.com/WangQiao/project-quality-hub/actions/workflows/ci.yml)
[![PyPI](https://img.shields.io/pypi/v/project-quality-hub.svg)](https://pypi.org/project/project-quality-hub/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)

Model Context Protocol tooling that gives AI assistants a trustworthy project graph, branch intelligence, smart incremental updates, and explainable quality scores.

> Looking for the Chinese overview? Jump to [中文简介](#中文简介).

## Quickstart

```bash
# 1. Install the package
pip install project-quality-hub

# 2. Analyse a repository and optionally enable live monitoring
project-quality-hub analyze ./demo --monitor

# 3. Retrieve the knowledge graph summary
project-quality-hub summary ./demo

# 4. Score a project or a single file
project-quality-hub score ./demo --file src/example.py --max-files 50
```

Prefer editable installs while developing?
```bash
pip install -e .[dev]
pytest
ruff check src
```

## Why Project Quality Hub?

- **Project graph intelligence** – Build a knowledge graph with entity-level insights, dependency edges, and risk scoring so assistants can reason about codebases.
- **Branch-aware memories** – Cache per-branch analyses, switch between them, and compare git branches without rebuilding from scratch.
- **Smart incremental updates** – Watch file changes with `watchdog` to refresh analysis results in the background.
- **Quality scoring built for AI** – Blend metrics, static-analysis findings, and heuristics into transparent 0‑100 scores with actionable recommendations.
- **MCP-native experience** – Ship the same capabilities via CLI commands or an MCP stdio server for Claude, Cursor, and other compatible clients.
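The scoring internals are not spelled out in this README; purely as an illustration of "blending metrics, static-analysis findings, and heuristics into transparent 0-100 scores", a weighted blend (all dimension names and weights here are hypothetical, not taken from the project) might look like:

```python
def blend_quality_score(components, weights=None):
    """Blend per-dimension scores (each 0-100) into one clamped 0-100 score.

    `components` maps dimension name -> score; `weights` maps the same
    names to relative importance (defaults to equal weighting).
    """
    if weights is None:
        weights = {name: 1.0 for name in components}
    total_weight = sum(weights[name] for name in components)
    raw = sum(components[name] * weights[name] for name in components) / total_weight
    # Clamp so rounding or out-of-range inputs never escape the 0-100 scale.
    return max(0.0, min(100.0, round(raw, 1)))


score = blend_quality_score(
    {"metrics": 82, "static_analysis": 64, "heuristics": 90},
    weights={"metrics": 0.5, "static_analysis": 0.3, "heuristics": 0.2},
)
print(score)  # 78.2
```

A transparent blend like this is what makes a score "explainable": each dimension's contribution can be reported alongside the final number.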

## CLI Essentials

```bash
# Analyse and cache a project (monitoring is opt-in)
project-quality-hub analyze /path/to/project [--force] [--monitor]

# Retrieve a summary of the analysed project
project-quality-hub summary /path/to/project

# Run quality scoring across the repo or a single file
project-quality-hub score /path/to/project [--file relative/path.py] [--max-files N]

# Control background monitoring
project-quality-hub monitor /path/to/project start|stop|status

# Launch the MCP stdio server
project-quality-hub server
project-quality-hub-server  # dedicated entry point
```

See `project-quality-hub --help` for the full command list.
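The commands above can also be driven from scripts. A minimal, generic subprocess wrapper is sketched below; the CLI's exit codes and output format are not documented here, so this just captures raw stdout:

```python
import subprocess


def run_tool(executable, *args, timeout=120):
    """Run a CLI tool and return (exit_code, stdout) as captured text."""
    result = subprocess.run(
        [executable, *args],
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    return result.returncode, result.stdout


# Example (requires project-quality-hub on PATH):
# code, out = run_tool("project-quality-hub", "summary", "./demo")
```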

## MCP Client Integration

1. Install the package on the machine hosting your MCP server.
2. Point your client at the stdio transport. Claude Desktop example:
   ```json
   {
     "endpoints": [
       {
         "name": "project-quality-hub",
         "command": ["project-quality-hub-server"],
         "transport": { "type": "stdio" }
       }
     ]
   }
   ```
3. Restart the client. Once it reconnects to the server, the tools listed above become available.
4. Need more detail? Check the full walkthrough in [`docs/integration.md`](docs/integration.md).
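MCP's stdio transport carries newline-delimited JSON-RPC 2.0 messages, so any client ultimately starts by writing an `initialize` request to the server's stdin. A sketch of building that first message (the client name and protocol version string below are illustrative, not taken from this project):

```python
import json


def initialize_request(request_id=1, protocol_version="2025-03-26"):
    """Build an MCP `initialize` JSON-RPC request as one newline-terminated line."""
    msg = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": protocol_version,
            "capabilities": {},
            "clientInfo": {"name": "example-client", "version": "0.0.1"},
        },
    }
    return json.dumps(msg) + "\n"


# To talk to the server, spawn `project-quality-hub-server`, write this line
# to its stdin, then read newline-delimited JSON responses from its stdout.
```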

## Project Layout

- `src/project_quality_hub/core`: knowledge-graph modelling, multi-branch management, incremental updates.
- `src/project_quality_hub/quality`: AST inspection, static-analysis adapters, scoring heuristics.
- `src/project_quality_hub/server`: MCP stdio adapter, task orchestration, utilities.
- `src/project_quality_hub/cli.py`: CLI entry point mirroring the MCP toolset.
- `tests/`: import safety plus behavioural tests for scoring and parsing.
- `docs/`: design notes, client integration, contributing guide.

## Testing & Development

- Run unit tests with `pytest`.
- Lint with `ruff check src`; format with `black src`.
- Export `WATCHDOG_FORCE_POLLING=1` in sandboxed environments, where native filesystem events may be unavailable, so monitoring falls back to deterministic polling.
- Clean build artefacts (`dist/`, `build/`) before running quality scoring for the most accurate results.
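A polling observer detects changes by comparing modification-time snapshots between rounds. A stdlib-only sketch of that comparison step (an illustration of the technique, not this project's implementation):

```python
import os


def snapshot_mtimes(paths):
    """Record the current modification time of each existing path."""
    return {p: os.path.getmtime(p) for p in paths if os.path.exists(p)}


def changed_paths(before, after):
    """Paths added, removed, or touched between two snapshots."""
    return {p for p in before.keys() | after.keys() if before.get(p) != after.get(p)}
```

Because polling depends only on mtimes read at fixed intervals, it behaves the same everywhere, which is what makes forced polling attractive in sandboxes.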

## Releases

1. Bump the version in `pyproject.toml` and `project_quality_hub/__init__.py`.
2. Build artefacts via `python -m build`.
3. Publish to PyPI using `twine upload dist/*`.
4. Tag the release, open a GitHub Release, and capture the highlights in `CHANGELOG.md`.
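Step 1 implies the two version strings must stay in sync. A small consistency check could look like the following (the `__version__` attribute name and the exact file contents are assumptions for illustration):

```python
import re


def extract_versions(pyproject_text, init_text):
    """Pull the version strings that the release checklist says must match."""
    py = re.search(r'^version\s*=\s*"([^"]+)"', pyproject_text, re.MULTILINE)
    init = re.search(r'__version__\s*=\s*"([^"]+)"', init_text)
    return (py.group(1) if py else None, init.group(1) if init else None)


# Usage sketch:
# a, b = extract_versions(Path("pyproject.toml").read_text(),
#                         Path("src/project_quality_hub/__init__.py").read_text())
# assert a == b, f"version mismatch: {a} != {b}"
```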

## Contributing

We welcome issues and pull requests! Review the [contributing guide](docs/CONTRIBUTING.md) for coding standards, workflow, and communication expectations. A code of conduct and PR templates help keep contributions friendly and consistent.

## License

Distributed under the [MIT License](LICENSE).

---

## 中文简介

`project-quality-hub` 将项目图谱、分支管理、智能增量更新和质量评分能力封装为 Model Context Protocol (MCP) 服务,方便 Claude、Cursor 等客户端直接调用。

- **项目图谱分析**:构建实体级知识图谱,输出依赖关系和风险评估。
- **多分支记忆**:缓存并比较不同 Git 分支的分析结果,快速切换。
- **实时增量更新**:结合 `watchdog` 监听文件变化,自动刷新结果。
- **质量评分**:综合指标与静态分析,提供 0-100 分的评分和优化建议。

仓库提供 CLI 与 MCP 双入口。更多集成细节请参阅 [`docs/integration.md`](docs/integration.md),贡献准则见 [`docs/CONTRIBUTING.md`](docs/CONTRIBUTING.md)。

---

Historical versions of this README remain available in [`README_EN.md`](README_EN.md).
