# Semantic Kernel Agent Factory
A comprehensive SDK for creating and managing AI agents powered by Microsoft Semantic Kernel with MCP (Model Context Protocol) server integration. Build sophisticated conversational agents with tool integration, deploy them as web services, or interact with them through a rich terminal interface.
## Features
- 🤖 **Agent Factory**: Create and manage multiple Semantic Kernel-based agents with different configurations
- 🔗 **MCP Integration**: Connect agents to external tools via Model Context Protocol (stdio and streamable HTTP servers)
- 🖥️ **Interactive Console**: Rich terminal-based chat interface with multi-agent support powered by Textual
- 🌐 **Web Service Factory**: Deploy agents as HTTP/REST APIs with A2A (Agent-to-Agent) protocol support
- ⚡ **Streaming Support**: Real-time response streaming for both console and web interfaces
- 📊 **Health Monitoring**: Built-in MCP server health checks and status monitoring
- 🔧 **Flexible Configuration**: YAML-based configuration with environment variable support
- 🎯 **Structured Outputs**: Support for JSON schema-based response formatting
## Installation
### Basic Installation
```bash
# Install core functionality only
pip install semantic-kernel-agent-factory
```
### Installation with Optional Features
```bash
# For console/CLI interface
pip install semantic-kernel-agent-factory[console]
# For web service deployment
pip install semantic-kernel-agent-factory[service]
# For development (includes testing, linting, and type checking tools)
pip install semantic-kernel-agent-factory[dev]
# For documentation generation
pip install semantic-kernel-agent-factory[docs]
# Install all optional features
pip install semantic-kernel-agent-factory[all]
```
### Development Installation
For local development:
```bash
# Clone the repository
git clone https://github.com/jhzhu89/semantic-kernel-agent-factory
cd semantic-kernel-agent-factory
# Install in editable mode with development dependencies only
pip install -e ".[dev]"
# OR install with all features for comprehensive development/testing
pip install -e ".[dev-all]"
# Use the Makefile for quick setup
make install-dev # Basic development setup
make install-dev-all # Development setup with all features
```
## Quick Start
### 1. Console Application
Create a configuration file `config.yaml`:
```yaml
agent_factory:
  agents:
    GeneralAssistant:
      name: "GeneralAssistant"
      instructions: |
        You are a helpful AI assistant.
        Answer questions clearly and concisely.
      model: "gpt-4"
      model_settings:
        temperature: 0.7

  openai_models:
    gpt-4:
      model: "gpt-4"
      api_key: "${OPENAI_API_KEY}"
      endpoint: "${AZURE_OPENAI_ENDPOINT}"
```
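Values such as `${OPENAI_API_KEY}` are resolved from environment variables when the configuration is loaded. As a rough illustration of the behavior (the helper below is hypothetical, not part of this SDK):

```python
import os
import re

def expand_env_vars(value: str) -> str:
    """Replace ${VAR} placeholders with environment values (hypothetical helper)."""
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), value)

os.environ["OPENAI_API_KEY"] = "sk-demo"
print(expand_env_vars("${OPENAI_API_KEY}"))  # sk-demo
```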
Run the interactive console:
```bash
# Note: Requires console dependencies
# Install with: pip install semantic-kernel-agent-factory[console]
agent-factory -c config.yaml
```
### 2. Python API - Agent Factory
```python
import asyncio
from agent_factory import AgentFactory, AgentFactoryConfig, AgentConfig
async def main():
    # Create configuration
    config = AgentFactoryConfig(
        agents={
            "assistant": AgentConfig(
                name="assistant",
                instructions="You are a helpful AI assistant",
                model="gpt-4"
            )
        },
        openai_models={
            "gpt-4": {
                "model": "gpt-4",
                "api_key": "your-api-key",
                "endpoint": "your-endpoint"
            }
        }
    )

    # Create and use agents
    async with AgentFactory(config) as factory:
        agent = factory.get_agent("assistant")
        # Use the agent for conversations

asyncio.run(main())
```
### 3. Web Service Deployment
Create a service configuration `service_config.yaml`:
```yaml
agent_factory:
  agents:
    ChatBot:
      name: "ChatBot"
      instructions: "You are a helpful chatbot"
      model: "gpt-4"

  openai_models:
    gpt-4:
      model: "gpt-4"
      api_key: "${OPENAI_API_KEY}"
      endpoint: "${AZURE_OPENAI_ENDPOINT}"

service_factory:
  services:
    ChatBot:
      card:
        name: "ChatBot"
        description: "AI-powered chatbot service"
      enable_token_streaming: true
```
Deploy as web service:
```python
# Note: Requires service dependencies
# Install with: pip install semantic-kernel-agent-factory[service]
from agent_factory import AgentServiceFactory, AgentServiceFactoryConfig
import uvicorn
async def create_app():
    config = AgentServiceFactoryConfig.from_file("service_config.yaml")

    async with AgentServiceFactory(config) as factory:
        app = await factory.create_application()
        return app

if __name__ == "__main__":
    uvicorn.run("main:create_app", host="0.0.0.0", port=8000)
```
## MCP Server Integration
Connect agents to external tools using Model Context Protocol servers:
```yaml
agent_factory:
  agents:
    ToolAgent:
      name: "ToolAgent"
      instructions: "You have access to various tools"
      model: "gpt-4"
      mcp_servers: ["time", "kubernetes"]

  mcp:
    servers:
      time:
        type: "stdio"
        command: "python"
        args: ["-m", "mcp_server_time"]

      kubernetes:
        type: "streamable_http"
        url: "https://k8s-mcp-server.example.com/mcp"
        timeout: 10
```
### Access Token Authentication
For HTTP-based MCP servers that require authentication, the MCP Python client SDK has a limitation: although a user access token is easy to obtain inside an HTTP service, the SDK's underlying HTTP client does not offer a straightforward way to attach that token to the request headers sent to MCP servers.
**Workaround: Filter-based Token Injection**
As a workaround, the system uses filters to inject access tokens before sending requests to MCP servers. See `filters.py` for details.
**Important Notes:**
- The server should consume the `access_token` for authentication purposes
- **Do not** include `access_token` in the tool's input schema definition
- The token is automatically injected by the filter before the request reaches the MCP server
- This is a temporary workaround until the MCP Python client SDK provides better header customization support
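The idea behind the filter can be sketched generically: wrap the tool-invocation call so the token is merged into the arguments just before the request goes out. The sketch below is hypothetical and illustrative only; the real implementation lives in `filters.py` and uses Semantic Kernel's filter mechanism.

```python
from typing import Callable

# Hypothetical sketch of filter-based token injection. The token is added at
# call time and is deliberately absent from the tool's input schema.
def with_access_token(
    handler: Callable[[dict], dict],
    get_token: Callable[[], str],
) -> Callable[[dict], dict]:
    def wrapped(arguments: dict) -> dict:
        # Merge the token into the outgoing arguments without mutating the input.
        arguments = {**arguments, "access_token": get_token()}
        return handler(arguments)
    return wrapped

def call_mcp_tool(arguments: dict) -> dict:  # stand-in for the real MCP call
    return arguments

secured = with_access_token(call_mcp_tool, lambda: "user-token-123")
print(secured({"query": "pods"}))  # {'query': 'pods', 'access_token': 'user-token-123'}
```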
## Console Features
The interactive console provides:
- **Multi-Agent Chat**: Switch between different agents in a tabbed interface
- **Real-time Streaming**: See responses as they're generated
- **MCP Status Monitoring**: Live health checks of connected MCP servers
- **Function Call Visibility**: See tool calls and results in real-time
- **Keyboard shortcuts**:
- `Ctrl+Enter`: Send message
- `Ctrl+L`: Clear chat
- `F1`: Toggle agent panel
- `F2`: Toggle logs
- `Ctrl+W`: Close tab
## Configuration
### Agent Configuration
```yaml
agent_factory:
  agents:
    MyAgent:
      name: "MyAgent"
      instructions: "System prompt for the agent"
      model: "gpt-4"
      model_settings:
        temperature: 0.7
        max_tokens: 2000
      response_json_schema:  # Optional structured output
        type: "object"
        properties:
          answer:
            type: "string"
      mcp:
        servers: ["tool1", "tool2"]
```
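With `response_json_schema` set, the agent's reply is expected to be JSON conforming to the schema. A minimal, hypothetical check of such a reply (the SDK's own validation logic may differ):

```python
import json

# Schema matching the example above: an object with a string "answer" field.
schema = {"type": "object", "properties": {"answer": {"type": "string"}}}

raw_reply = '{"answer": "Paris is the capital of France."}'
reply = json.loads(raw_reply)

# Minimal structural check against the schema (illustrative only).
assert isinstance(reply, dict)
assert isinstance(reply.get("answer"), str)
print(reply["answer"])  # Paris is the capital of France.
```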
### OpenAI Models
```yaml
agent_factory:
  openai_models:
    gpt-4:
      model: "gpt-4"
      api_key: "${OPENAI_API_KEY}"
      endpoint: "${AZURE_OPENAI_ENDPOINT}"

    gpt-3.5-turbo:
      model: "gpt-3.5-turbo"
      api_key: "${OPENAI_API_KEY}"
      endpoint: "${AZURE_OPENAI_ENDPOINT}"
```
### MCP Server Types
**Stdio servers** (local processes):
```yaml
mcp:
  servers:
    local_tool:
      type: "stdio"
      command: "python"
      args: ["-m", "my_mcp_server"]
      env:
        DEBUG: "true"
```
**Streamable HTTP servers** (HTTP-based):
```yaml
mcp:
  servers:
    remote_tool:
      type: "streamable_http"
      url: "https://api.example.com/mcp"
      timeout: 15
```
## CLI Commands
```bash
# Start interactive chat (requires console dependencies)
agent-factory -c config.yaml
# List configured agents
agent-factory list -c config.yaml
# Enable verbose logging
agent-factory -c config.yaml --verbose
# Custom log directory
agent-factory -c config.yaml --log-dir /path/to/logs
```
## Environment Variables
Configure using environment variables:
```bash
# OpenAI Configuration
export OPENAI_API_KEY="your-api-key"
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
# Optional: Agent Factory Settings
export AGENT_FACTORY__MODEL_SELECTION="cost" # first, cost, latency, quality
export AGENT_FACTORY__MCP_FAILURE_STRATEGY="lenient" # strict, lenient
```
## Examples
See the `examples/` directory for:
- [`cli_example.yaml`](examples/cli_example.yaml) - Console application setup
- [`agent_service_factory_config.yaml`](examples/agent_service_factory_config.yaml) - Web service configuration
- [`web_service.py`](examples/web_service.py) - Web service deployment example
## Development
### Quick Setup
```bash
# Clone repository
git clone https://github.com/jhzhu89/semantic-kernel-agent-factory
cd semantic-kernel-agent-factory
# Install in editable mode with development dependencies
pip install -e ".[dev]"
```
### Available Development Tools
The `[dev]` extra includes:
- **Testing**: pytest, pytest-asyncio, pytest-cov, pytest-mock
- **Code Formatting**: black, isort, ruff
- **Type Checking**: mypy with type stubs
- **Linting**: flake8, ruff
- **Coverage**: pytest-cov for test coverage reports
### Development Commands
```bash
# Run tests
pytest
# Run tests with coverage
pytest --cov=agent_factory --cov-report=html
# Format code
black .
isort .
# Lint code
ruff check .
flake8 .
# Type checking
mypy agent_factory
# Run all quality checks
make test-cov # Runs tests with coverage
make format # Formats code
make type-check # Type checking
```
### Optional Development Features
```bash
# Install with console dependencies for development
pip install -e ".[dev,console]"
# For web service development
pip install -e ".[dev,service]"
# Install all features for development
pip install -e ".[dev,all]"
```
## Architecture
The Semantic Kernel Agent Factory consists of several key components:
- **AgentFactory**: Core factory for creating and managing Semantic Kernel agents
- **AgentServiceFactory**: Web service wrapper that exposes agents as HTTP APIs (requires `[service]` extra)
- **MCPProvider**: Manages connections to Model Context Protocol servers
- **Console Application**: Terminal-based interface for interactive agent chat (requires `[console]` extra)
- **Configuration System**: YAML-based configuration with validation
### Optional Components
Different installation options enable additional features:
- **`[console]`**: Interactive terminal interface with Textual UI, Click CLI commands
- **`[service]`**: A2A-based web services, Starlette server support
- **`[docs]`**: Sphinx-based documentation generation
- **`[dev]`**: Development tools for testing, linting, and type checking
## Requirements
- Python 3.10+
- Microsoft Semantic Kernel
- Azure OpenAI or OpenAI API access
- Optional: MCP-compatible tool servers
## Contributing
Contributions are welcome! Please:
1. Fork the repository
2. Create a feature branch
3. Install development dependencies: `pip install -e ".[dev]"`
4. Add tests for new functionality
5. Run the test suite and ensure all checks pass:
```bash
# Run tests
pytest
# Format code
black .
isort .
# Lint code
ruff check .
flake8 .
# Type checking
mypy agent_factory
```
6. Submit a pull request
### Development Environment Setup
```bash
# Install with console dependencies for development
pip install -e ".[dev,console]"
# For web service development
pip install -e ".[dev,service]"
# For full development environment
pip install -e ".[dev,all]"
```
### Project Structure
- `agent_factory/` - Core library code
- `tests/` - Test suite
- `examples/` - Usage examples
- `docs/` - Documentation (when using `[docs]` extra)
### Code Quality Standards
This project uses:
- **Black** for code formatting
- **isort** for import sorting
- **Ruff** for linting
- **Flake8** for additional linting
- **mypy** for type checking
- **pytest** for testing with >80% coverage requirement
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Support & Documentation
- 📖 [GitHub Repository](https://github.com/jhzhu89/semantic-kernel-agent-factory)
- 🐛 [Issue Tracker](https://github.com/jhzhu89/semantic-kernel-agent-factory/issues)
- 💬 [Discussions](https://github.com/jhzhu89/semantic-kernel-agent-factory/discussions)
- 📚 [Examples](https://github.com/jhzhu89/semantic-kernel-agent-factory/tree/main/examples)
## Related Projects
- [Microsoft Semantic Kernel](https://github.com/microsoft/semantic-kernel)
- [Model Context Protocol](https://modelcontextprotocol.io/)
- [Textual](https://github.com/Textualize/textual) - Powers the console interface