# LangChain Integration Health Dashboard & Testing Framework
A comprehensive testing framework and dashboard for monitoring LangChain integration compatibility, addressing critical pain points in the LangChain ecosystem.
## Problem Statement
LangChain has major integration compatibility issues:
- Missing `.bind_tools()` methods in integrations like MLXPipeline
- Inconsistent API support across different model providers
- Poor integration testing leading to breaking changes
- No visibility into which integrations support which features
- Developers waste time discovering integration limitations at runtime
## Solution
This framework provides:
1. **Integration Compatibility Testing Framework** - Automated testing suite that validates LangChain integrations
2. **Real-time Integration Health Dashboard** - Web-based dashboard showing integration compatibility status
3. **Integration Template Generator** - Standardized templates for new integrations
## Quick Start
### Installation
#### From PyPI (Recommended)
```bash
pip install langchain-integration-health
```
#### From Source
```bash
git clone https://github.com/sadiqkhzn/langchain-integration-health.git
cd langchain-integration-health
pip install -e .
```
### Run the Dashboard
```bash
langchain-health dashboard
```
Or, from a source checkout, run the Streamlit app directly:
```bash
streamlit run src/dashboard/app.py
```
### CLI Usage
```bash
# Discover available integrations
langchain-health discover
# Run integration tests
langchain-health test
# Launch the dashboard
langchain-health dashboard
# Generate compatibility report
langchain-health report
# Clean old test results
langchain-health clean
```
### Programmatic Usage
```python
import asyncio
from langchain_integration_health.testers import LLMIntegrationTester
from langchain_integration_health.utils.config import Config
# Test a specific integration
async def test_integration():
    config = Config.from_env()

    # Example: Test OpenAI integration
    from langchain_openai import ChatOpenAI

    tester = LLMIntegrationTester(ChatOpenAI, config.get_integration_config("ChatOpenAI"))
    result = await tester.run_all_tests()

    print(f"Compatibility Score: {result.compatibility_score}")
    print(f"bind_tools Support: {result.bind_tools_support}")
    print(f"Streaming Support: {result.streaming_support}")

asyncio.run(test_integration())
```
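Because `run_all_tests()` is a coroutine, several integrations can also be tested concurrently. A minimal sketch, assuming `langchain-openai` and `langchain-anthropic` are installed (adapt the imports to whichever providers you actually use):

```python
import asyncio
from langchain_integration_health.testers import LLMIntegrationTester
from langchain_integration_health.utils.config import Config

async def test_many():
    config = Config.from_env()

    # Import whichever integrations are installed in your environment.
    from langchain_openai import ChatOpenAI
    from langchain_anthropic import ChatAnthropic

    testers = [
        LLMIntegrationTester(ChatOpenAI, config.get_integration_config("ChatOpenAI")),
        LLMIntegrationTester(ChatAnthropic, config.get_integration_config("ChatAnthropic")),
    ]

    # run_all_tests() is async, so the testers can run in parallel.
    results = await asyncio.gather(*(t.run_all_tests() for t in testers))
    for result in results:
        print(result.integration_name, result.compatibility_score)

asyncio.run(test_many())
```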
## Features
### Comprehensive Testing
- Tests for required methods: `bind_tools()`, `stream()`, `with_structured_output()`, etc.
- Compatibility matrix generation
- Performance benchmarking
- Error handling validation
### Real-time Dashboard
- Visual compatibility matrix with color-coded status indicators
- Detailed test results and error reporting
- Performance metrics and benchmarking data
- Historical trend analysis
- Export capabilities (JSON, CSV, Markdown)
### Automatic Discovery
- Scans installed packages for LangChain integrations (a discovery sketch follows this list)
- Supports main langchain, langchain-community, and third-party packages
- Parallel testing for faster results
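The actual discovery logic lives in `utils/discovery.py` and isn't reproduced here; as a rough, standard-library-only illustration of the idea, installed distributions can be scanned by name:

```python
from importlib import metadata
from typing import List

def find_langchain_packages() -> List[str]:
    """List installed distributions whose name suggests a LangChain integration package."""
    names = []
    for dist in metadata.distributions():
        name = (dist.metadata.get("Name") or "").lower()
        if name.startswith("langchain"):
            names.append(name)
    return sorted(set(names))

print(find_langchain_packages())
```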
### Integration Fixes
- Example implementations for common issues (e.g., MLXPipeline bind_tools fix)
- Wrapper patterns for adding missing functionality
- Best practices for LangChain compatibility
## Architecture
```
langchain-integration-health/
├── src/
│   ├── testers/               # Testing framework
│   │   ├── base_tester.py
│   │   ├── llm_tester.py
│   │   ├── chat_model_tester.py
│   │   └── embeddings_tester.py
│   ├── dashboard/             # Streamlit dashboard
│   │   ├── app.py
│   │   ├── components.py
│   │   └── data_loader.py
│   ├── utils/                 # Utilities
│   │   ├── config.py
│   │   ├── reporters.py
│   │   └── discovery.py
│   └── examples/              # Example implementations
│       └── mlx_pipeline_fix.py
├── .github/workflows/         # CI/CD integration
└── tests/                     # Test suite
```
## Testing Framework
### Base Classes
- `BaseIntegrationTester`: Abstract base for all integration testers
- `LLMIntegrationTester`: Specialized for LLM integrations
- `ChatModelTester`: Specialized for chat models
- `EmbeddingsTester`: Specialized for embedding models
### Test Types
1. **Method Existence Tests**: Verify required methods are present (a minimal sketch follows this list)
2. **Functionality Tests**: Test actual method execution
3. **Error Handling Tests**: Validate proper error handling
4. **Performance Tests**: Measure latency and throughput
5. **Compatibility Tests**: Check version compatibility
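As a rough illustration of test type 1, a method-existence check can be done with plain introspection. The sketch below is independent of the framework's internal implementation; it reuses the `REQUIRED_METHODS` list shown in the API Reference further down:

```python
from typing import Dict, Type

REQUIRED_METHODS = ["invoke", "ainvoke", "stream", "astream", "bind_tools", "with_structured_output"]

def check_required_methods(integration_class: Type) -> Dict[str, bool]:
    """Report which of the expected LangChain methods the class defines."""
    return {
        name: callable(getattr(integration_class, name, None))
        for name in REQUIRED_METHODS
    }

# Example (requires langchain-openai):
# from langchain_openai import ChatOpenAI
# print(check_required_methods(ChatOpenAI))
```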
## Dashboard Features
### Compatibility Matrix
Color-coded grid showing which integrations support which features:
- Green: High compatibility (>=0.8)
- Yellow: Medium compatibility (>=0.5 and <0.8)
- Red: Low compatibility (<0.5)
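For illustration, the thresholds above translate into a status color roughly like this (not the dashboard's actual code):

```python
def status_color(compatibility_score: float) -> str:
    """Map a compatibility score onto the dashboard's traffic-light scheme."""
    if compatibility_score >= 0.8:
        return "green"   # high compatibility
    if compatibility_score >= 0.5:
        return "yellow"  # medium compatibility
    return "red"         # low compatibility

assert status_color(0.9) == "green"
assert status_color(0.6) == "yellow"
assert status_color(0.2) == "red"
```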
### Integration Details
Expandable sections showing:
- Full test results
- Error and warning details
- Performance metrics
- Historical data
### Export Options
- JSON: Structured data for programmatic use
- CSV: Spreadsheet-compatible format
- Markdown: Human-readable reports
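The exports themselves are produced by `utils/reporters.py`; purely for illustration, a result dictionary shaped like `IntegrationTestResult` (see the API Reference) can be rendered into the three formats with the standard library:

```python
import csv
import io
import json

# Hypothetical result dict, shaped like IntegrationTestResult.
result = {"integration_name": "ChatOpenAI", "compatibility_score": 0.92, "bind_tools_support": True}

# JSON: structured data for programmatic use
json_report = json.dumps(result, indent=2)

# CSV: spreadsheet-compatible format
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=result.keys())
writer.writeheader()
writer.writerow(result)
csv_report = buffer.getvalue()

# Markdown: human-readable report
markdown_report = "| field | value |\n|---|---|\n" + "\n".join(
    f"| {key} | {value} |" for key, value in result.items()
)

print(json_report, csv_report, markdown_report, sep="\n")
```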
## Configuration
### Environment Variables
```bash
# Database
LIH_DATABASE_URL=sqlite:///integration_health.db
# Testing
LIH_TEST_TIMEOUT=30
LIH_PARALLEL_TESTS=true
LIH_MOCK_MODE=false
# Dashboard
LIH_DASHBOARD_HOST=localhost
LIH_DASHBOARD_PORT=8501
# API Keys (optional, for real testing)
OPENAI_API_KEY=your_key_here
ANTHROPIC_API_KEY=your_key_here
GOOGLE_API_KEY=your_key_here
```
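These variables are consumed by `Config.from_env()`. The exact parsing isn't documented here, but a sketch of how the `LIH_*` variables might map onto configuration fields looks like this (field names mirror the JSON file in the next subsection, not necessarily the package's real `Config` class):

```python
import os
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class SketchConfig:
    # Illustrative only; defaults follow the example configuration file below.
    database_url: str = "sqlite:///integration_health.db"
    test_timeout: int = 30
    parallel_tests: bool = True
    mock_mode: bool = False
    api_keys: Dict[str, str] = field(default_factory=dict)

    @classmethod
    def from_env(cls) -> "SketchConfig":
        return cls(
            database_url=os.getenv("LIH_DATABASE_URL", cls.database_url),
            test_timeout=int(os.getenv("LIH_TEST_TIMEOUT", cls.test_timeout)),
            parallel_tests=os.getenv("LIH_PARALLEL_TESTS", "true").lower() == "true",
            mock_mode=os.getenv("LIH_MOCK_MODE", "false").lower() == "true",
            api_keys={k: v for k, v in {
                "openai": os.getenv("OPENAI_API_KEY"),
                "anthropic": os.getenv("ANTHROPIC_API_KEY"),
                "google": os.getenv("GOOGLE_API_KEY"),
            }.items() if v},
        )
```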
### Configuration File
```json
{
  "database_url": "sqlite:///integration_health.db",
  "test_timeout": 30,
  "parallel_tests": true,
  "mock_mode": false,
  "api_keys": {
    "openai": "your_key_here",
    "anthropic": "your_key_here"
  }
}
```
## CI/CD Integration
The framework includes a GitHub Actions workflow for automated testing:
```yaml
# .github/workflows/integration-tests.yml
# Runs daily and on PRs to test all integrations
```
### Workflow Features
- Automatic integration discovery
- Parallel testing across integrations
- Report generation and artifact upload
- PR comment integration
## MLXPipeline Fix Example
The framework includes a complete example of how to fix the MLXPipeline `bind_tools` issue:
```python
from langchain_integration_health.examples.mlx_pipeline_fix import create_mlx_wrapper
# MLXPipeline ships with langchain-community (and needs the MLX extras installed)
from langchain_community.llms.mlx_pipeline import MLXPipeline

# Original MLXPipeline (missing bind_tools)
mlx = MLXPipeline.from_model_id("mlx-community/Llama-3.2-1B-Instruct-4bit")

# Wrap for LangChain compatibility
langchain_mlx = create_mlx_wrapper(mlx)

# Now you can use bind_tools!
from langchain.tools import tool

@tool
def calculator(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

mlx_with_tools = langchain_mlx.bind_tools([calculator])
response = mlx_with_tools.invoke("What is 5 + 3?")
```
## API Reference
### Testing Classes
#### `BaseIntegrationTester`
Base class for all integration testers.
```python
class BaseIntegrationTester:
    def __init__(self, integration_class: Type, config: Optional[Dict] = None)
    async def run_all_tests(self) -> IntegrationTestResult
    def _test_required_methods(self) -> Dict[str, bool]
```
#### `LLMIntegrationTester`
Specialized tester for LLM integrations.
```python
class LLMIntegrationTester(BaseIntegrationTester):
    REQUIRED_METHODS = ['invoke', 'ainvoke', 'stream', 'astream', 'bind_tools', 'with_structured_output']

    async def _test_bind_tools_support(self) -> bool
    async def _test_streaming_support(self) -> bool
```
### Data Models
#### `IntegrationTestResult`
Stores test results for an integration.
```python
@dataclass
class IntegrationTestResult:
    integration_name: str
    integration_version: str
    test_timestamp: datetime
    bind_tools_support: bool
    streaming_support: bool
    structured_output_support: bool
    async_support: bool
    errors: List[str]
    warnings: List[str]
    performance_metrics: Dict[str, float]
    compatibility_score: float
```
## Contributing
1. Fork the repository
2. Create a feature branch
3. Add tests for your changes
4. Ensure all tests pass
5. Submit a pull request
### Development Setup
```bash
git clone https://github.com/sadiqkhzn/langchain-integration-health.git
cd langchain-integration-health
pip install -e ".[dev]"
pytest tests/
```
## License
MIT License - see LICENSE file for details.
## Support
For issues and feature requests, please use the GitHub issue tracker.