# cortana-agent

- **Name**: cortana-agent
- **Version**: 0.1.1
- **Summary**: Cortana - A Helpful AI Assistant with MCP support, image search, and code execution capabilities
- **Author**: Tristan Padiou <padioutristan@gmail.com>
- **Upload time**: 2025-07-25 21:35:41
- **Requires Python**: >=3.13
- **License**: MIT
- **Keywords**: ai, assistant, google-ai, mcp, openai, pydantic-ai
# Cortana - AI Assistant with MCP Support

A powerful AI assistant built with pydantic-ai, featuring MCP (Model Context Protocol) server integration, Google tools, code execution capabilities, and advanced memory management.

## 🚀 Features

- **Multi-LLM Support**: Compatible with Google's Gemini and OpenAI models
- **MCP Server Integration**: Connect to external tools and services via MCP protocol
- **Google Tools**: Image search and code execution capabilities
- **Memory Management**: Automatic conversation summarization for long sessions
- **Media Support**: Handle audio, images, and PDF files
- **Async/Await**: Full asynchronous support for better performance
- **Extensible**: Easy to add custom tools and integrations

## 📦 Installation

### Using UV (Recommended)

```bash
# Clone the repository
git clone <repository-url>
cd Cortana

# Install using UV
uv sync
```

### Using pip

```bash
pip install -e .
```

### Using pip with requirements.txt

```bash
pip install -r requirements.txt
```

## 🔧 Dependencies

- **pydantic-ai >= 0.4.0**: Core AI framework
- **tavily-python >= 0.5.1**: Web search capabilities
- **ipykernel >= 6.30.0**: Jupyter notebook support

## 🚀 Quick Start

### Basic Usage

```python
import asyncio
from cortana.cortana_agent import Cortana_agent
from pydantic_ai.models.google import GoogleModel
from pydantic_ai.providers.google import GoogleProvider

# Initialize with Google Gemini
llm = GoogleModel('gemini-2.5-flash', provider=GoogleProvider(api_key="your-api-key"))
cortana = Cortana_agent(llm=llm)

# Simple chat
async def main():
    async with cortana:
        response = await cortana.chat(["Hello, what can you help me with?"])
        print(f"UI Version: {response.ui_version}")
        print(f"Voice Version: {response.voice_version}")

asyncio.run(main())
```

### With OpenAI

```python
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.openai import OpenAIProvider

llm = OpenAIModel('gpt-4o-mini', provider=OpenAIProvider(api_key="your-openai-key"))
cortana = Cortana_agent(llm=llm)
```

## ๐Ÿ› ๏ธ Configuration Options

### Cortana_agent Parameters

```python
cortana = Cortana_agent(
    llm=your_llm,                              # Required: pydantic-ai compatible model
    tools=[],                                  # Optional: List of custom tools
    mcp_servers=[],                            # Optional: List of MCP servers
    summarizer=False,                          # Optional: Enable conversation summarization
    custom_summarizer_agent=None,              # Optional: Custom summarizer agent
    memory_length=20,                          # Optional: Messages before summarization
    memory_summarizer_length=15                # Optional: Messages to summarize
)
```

## 🔗 MCP Server Integration

### Adding MCP Servers

```python
from cortana.utils.helper_functions import MCP_server_helper
from pydantic_ai.mcp import MCPServerStreamableHTTP, MCPServerSSE, MCPServerStdio

# Using helper class
mcp_helper = MCP_server_helper()
mcp_helper.add_mpc_server(type='http', mpc_server_url='https://mcp.notion.com/mcp')
mcp_helper.add_mpc_server(type='sse', mpc_server_url='https://mcp.notion.com/sse')
mcp_helper.add_mpc_server(type='stdio', command='npx', args=['-y', 'mcp-remote', 'https://mcp.notion.com/mcp'])

# Initialize Cortana with MCP servers
cortana = Cortana_agent(llm=llm, mcp_servers=mcp_helper.get_mpc_servers())
```

### Direct MCP Server Setup

```python
mcp_servers = [
    MCPServerStreamableHTTP(url='https://mcp.notion.com/mcp', headers=None),
    MCPServerSSE(url='https://mcp.notion.com/sse', headers=None),
    MCPServerStdio(command='npx', args=['-y', 'mcp-remote', 'https://mcp.notion.com/mcp'], env=None)
]

cortana = Cortana_agent(llm=llm, mcp_servers=mcp_servers)
```

## ๐Ÿ› ๏ธ Google Tools Integration

### Image Search Tool

```python
from cortana.PrebuiltTools.google_tools import search_images_tool

# Setup image search
image_tool = search_images_tool(
    api_key="your-google-api-key",
    search_engine_id="your-custom-search-engine-id"
)

cortana = Cortana_agent(llm=llm, tools=[image_tool])

# Usage
response = await cortana.chat(["Find me an image of a sunset"])
```

### Code Execution Tool

```python
from cortana.PrebuiltTools.google_tools import code_execution_tool

# Setup code execution
code_tool = code_execution_tool(api_key="your-gemini-api-key")

cortana = Cortana_agent(llm=llm, tools=[code_tool])

# Usage
response = await cortana.chat(["Calculate the factorial of 10 using Python"])
```

### Combined Tools Example

```python
tools = [
    search_images_tool(api_key=google_api_key, search_engine_id=search_engine_id),
    code_execution_tool(api_key=google_api_key)
]

cortana = Cortana_agent(llm=llm, tools=tools)
```

## 💾 Memory Management

### Enable Automatic Summarization

```python
cortana = Cortana_agent(
    llm=llm,
    summarizer=True,                    # Enable summarization
    memory_length=20,                   # Summarize after 20 messages
    memory_summarizer_length=15         # Summarize oldest 15 messages
)
```
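
A hedged usage sketch of how these thresholds are expected to behave, reusing the `cortana` instance configured above; the exact point at which summarization triggers depends on how many messages each exchange adds:

```python
import asyncio

async def demo_summarization():
    async with cortana:
        # Enough short exchanges to push the history past memory_length.
        for i in range(12):
            await cortana.chat([f"Note #{i}: remember this for later."])
        # Once the history exceeds memory_length, the oldest
        # memory_summarizer_length messages should be folded into a summary,
        # keeping the overall message count bounded.
        print(f"Messages in memory: {len(cortana.memory.messages)}")

asyncio.run(demo_summarization())
```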

### Custom Summarizer Agent

```python
from pydantic_ai import Agent

custom_summarizer = Agent(
    llm, 
    instructions='Create detailed technical summaries focusing on code and solutions.'
)

cortana = Cortana_agent(
    llm=llm,
    summarizer=True,
    custom_summarizer_agent=custom_summarizer
)
```

### Accessing Memory and State

```python
# Access conversation history
messages = cortana.memory.messages

# Access agent state
deps = cortana.deps
user_name = cortana.deps.user
agents_output = cortana.deps.agents_output

# Reset memory
cortana.reset()
```

## 📱 Media Support

### Text Input

```python
response = await cortana.chat(["What's the weather like today?"])
```

### Image Input

```python
from pydantic_ai.messages import BinaryContent

# From file
with open("image.png", "rb") as f:
    image_data = f.read()

response = await cortana.chat([
    "What do you see in this image?",
    BinaryContent(data=image_data, media_type='image/png')
])
```

### Audio Input

```python
# Audio file
with open("audio.wav", "rb") as f:
    audio_data = f.read()

response = await cortana.chat([
    "Transcribe this audio",
    BinaryContent(data=audio_data, media_type='audio/wav')
])
```

### PDF Input

```python
# PDF file
with open("document.pdf", "rb") as f:
    pdf_data = f.read()

response = await cortana.chat([
    "Summarize this document",
    BinaryContent(data=pdf_data, media_type='application/pdf')
])
```
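
### Combining Inputs

Because `chat` takes a list of parts, several attachments can presumably be mixed with a text prompt in a single request. A minimal sketch (the file names are placeholders):

```python
from pydantic_ai.messages import BinaryContent

with open("chart.png", "rb") as f:
    chart_data = f.read()
with open("report.pdf", "rb") as f:
    report_data = f.read()

# One request combining a text prompt, an image, and a PDF attachment.
response = await cortana.chat([
    "Compare the chart against the figures in this report",
    BinaryContent(data=chart_data, media_type='image/png'),
    BinaryContent(data=report_data, media_type='application/pdf')
])
```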

## 🔧 Advanced Usage

### Context Manager (Recommended)

```python
async def main():
    async with Cortana_agent(llm=llm, mcp_servers=mcp_servers) as cortana:
        # MCP servers are automatically connected
        response = await cortana.chat(["Help me with my Notion workspace"])
        print(response.ui_version)
        # MCP servers are automatically disconnected
```

### Manual Connection Management

```python
cortana = Cortana_agent(llm=llm, mcp_servers=mcp_servers)

# Connect manually
await cortana.connect()

try:
    response = await cortana.chat(["Hello"])
finally:
    # Disconnect manually
    await cortana.disconnect()
```

### Custom Tools

```python
from pydantic_ai.tools import Tool

def custom_weather_tool(location: str) -> str:
    """Get weather information for a location"""
    # Your weather API logic here
    return f"Weather in {location}: Sunny, 25ยฐC"

weather_tool = Tool(
    custom_weather_tool,
    name='get_weather',
    description='Get current weather for any location'
)

cortana = Cortana_agent(llm=llm, tools=[weather_tool])
```
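
### Async Custom Tools

pydantic-ai tools can also wrap `async` functions, which suits tools that perform network I/O. A sketch, assuming a hypothetical `fetch_stock_price` coroutine whose placeholder body stands in for a real API call (`llm` and `weather_tool` come from the snippet above):

```python
import asyncio
from pydantic_ai.tools import Tool

async def fetch_stock_price(ticker: str) -> str:
    """Look up the latest price for a ticker symbol."""
    # Placeholder for an asynchronous call to a real market-data API.
    await asyncio.sleep(0.1)
    return f"{ticker}: 123.45 USD (placeholder)"

stock_tool = Tool(
    fetch_stock_price,
    name='get_stock_price',
    description='Get the latest stock price for a ticker symbol'
)

cortana = Cortana_agent(llm=llm, tools=[weather_tool, stock_tool])
```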

## ๐Ÿ“ Complete Example

```python
import asyncio
import os
from dotenv import load_dotenv

from cortana.cortana_agent import Cortana_agent
from cortana.utils.helper_functions import MCP_server_helper
from cortana.PrebuiltTools.google_tools import search_images_tool, code_execution_tool

from pydantic_ai.models.google import GoogleModel
from pydantic_ai.providers.google import GoogleProvider
from pydantic_ai.messages import BinaryContent

# Load environment variables
load_dotenv()

async def main():
    # Setup LLM
    llm = GoogleModel('gemini-2.5-flash', 
                     provider=GoogleProvider(api_key=os.getenv('GOOGLE_API_KEY')))
    
    # Setup MCP servers
    mcp_helper = MCP_server_helper()
    mcp_helper.add_mpc_server(type='stdio', command='npx', 
                             args=['-y', '@modelcontextprotocol/server-filesystem', '/tmp'])
    
    # Setup tools
    tools = [
        search_images_tool(
            api_key=os.getenv('GOOGLE_API_KEY'),
            search_engine_id=os.getenv('GOOGLE_SEARCH_ENGINE_ID')
        ),
        code_execution_tool(api_key=os.getenv('GOOGLE_API_KEY'))
    ]
    
    # Initialize Cortana
    cortana = Cortana_agent(
        llm=llm,
        tools=tools,
        mcp_servers=mcp_helper.get_mpc_servers(),
        summarizer=True,
        memory_length=20
    )
    
    # Use context manager for automatic connection handling
    async with cortana:
        # Set user name
        cortana.deps.user = "Alice"
        
        # Text conversation
        response = await cortana.chat(["Hello Cortana, what can you help me with?"])
        print("Cortana:", response.voice_version)
        
        # Math problem with code execution
        response = await cortana.chat(["Calculate the sum of squares from 1 to 100"])
        print("Math Result:", response.ui_version)
        
        # Image search
        response = await cortana.chat(["Find me an image of a beautiful landscape"])
        print("Image Search:", response.ui_version)
        
        # Check conversation history
        print(f"Total messages in memory: {len(cortana.memory.messages)}")

if __name__ == "__main__":
    asyncio.run(main())
```

## 🧪 Testing

Run the included Jupyter notebooks to test different features:

- `notebooks/cortana_test.ipynb`: Basic functionality testing
- `notebooks/cort_mcp_test.ipynb`: MCP server integration testing
- `notebooks/cortana_voice_test.ipynb`: Voice/audio capabilities testing
- `notebooks/memory_handling.ipynb`: Memory management testing

## 🔑 Environment Variables

Create a `.env` file in your project root:

```env
GOOGLE_API_KEY=your_google_api_key
GOOGLE_SEARCH_ENGINE_ID=your_custom_search_engine_id
OPENAI_API_KEY=your_openai_api_key
TAVILY_API_KEY=your_tavily_api_key
```
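
To fail fast when a key is missing, rather than at the first API call, a small startup check along these lines can help; the variable names match the `.env` example above:

```python
import os
from dotenv import load_dotenv

load_dotenv()

# Verify that the keys this project expects are actually set.
required = ['GOOGLE_API_KEY', 'GOOGLE_SEARCH_ENGINE_ID']
missing = [name for name in required if not os.getenv(name)]
if missing:
    raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
```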

## ๐Ÿค Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a pull request

## 📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

## 🆘 Support

For issues and questions:
1. Check the notebooks in the `notebooks/` directory for examples
2. Review the docstrings in the source code
3. Open an issue on GitHub

            
