vibecore 0.2.0

Name: vibecore
Version: 0.2.0
Summary: Build your own AI-powered automation tools in the terminal with this extensible agent framework
Upload time: 2025-08-14 04:18:13
Requires Python: >=3.11
License: MIT
Keywords: agents, ai, anthropic, assistant, automation, claude, cli, framework, gpt, mcp, model-context-protocol, openai, terminal, textual, tool-use, tui

# vibecore

<div align="center">

[![PyPI version](https://badge.fury.io/py/vibecore.svg)](https://badge.fury.io/py/vibecore)
[![Python 3.11+](https://img.shields.io/badge/python-3.11+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![PyPI downloads](https://img.shields.io/pypi/dm/vibecore.svg)](https://pypistats.org/packages/vibecore)
[![Code style: ruff](https://img.shields.io/badge/code%20style-ruff-000000.svg)](https://github.com/astral-sh/ruff)
[![Checked with pyright](https://microsoft.github.io/pyright/img/pyright_badge.svg)](https://microsoft.github.io/pyright/)

**Build your own AI-powered automation tools in the terminal with this extensible agent framework**

[Features](#key-features) · [Installation](#installation) · [Usage](#usage) · [Development](#development) · [Contributing](#contributing)

</div>

---

<p align="center" style="max-width: 800px; margin: 0 auto;">
    <img src="docs/images/screenshot.png" alt="vibecore terminal screenshot" style="max-width: 100%; height: auto;">
</p>

## Overview

vibecore is a **Do-it-yourself Agent Framework** that transforms your terminal into a powerful AI workspace. More than just a chat interface, it's a complete platform for building and orchestrating custom AI agents that can manipulate files, execute code, run shell commands, and manage complex workflows—all from the comfort of your terminal.

Built on [Textual](https://textual.textualize.io/) and the [OpenAI Agents SDK](https://github.com/openai/openai-agents-python), vibecore provides the foundation for creating your own AI-powered automation tools. Whether you're automating development workflows, building custom AI assistants, or experimenting with agent-based systems, vibecore gives you the building blocks to craft exactly what you need.

### Key Features

- **AI-Powered Chat Interface** - Interact with state-of-the-art language models through an intuitive terminal interface
- **Rich Tool Integration** - Built-in tools for file operations, shell commands, Python execution, and task management
- **MCP Support** - Connect to external tools and services via Model Context Protocol servers
- **Beautiful Terminal UI** - Modern, responsive interface with dark/light theme support
- **Real-time Streaming** - See AI responses as they're generated with smooth streaming updates
- **Extensible Architecture** - Easy to add new tools and capabilities
- **High Performance** - Async-first design for responsive interactions
- **Context Management** - Maintains state across tool executions for coherent workflows

## Installation

### Prerequisites

- Python 3.11 or higher

### Install from PyPI

```bash
# Install vibecore
pip install vibecore

# Configure your API key
export ANTHROPIC_API_KEY="your-api-key-here"
# or
export OPENAI_API_KEY="your-api-key-here"

# Run vibecore
vibecore
```

### Install from Source

```bash
# Clone the repository
git clone https://github.com/serialx/vibecore.git
cd vibecore

# Install with pip
pip install -e .

# Or install with uv (recommended for development)
uv sync

# Configure your API key
export ANTHROPIC_API_KEY="your-api-key-here"
# or
export OPENAI_API_KEY="your-api-key-here"

# Run vibecore
vibecore
# or with uv
uv run vibecore
```

## Usage

### Basic Commands

Once vibecore is running, you can:

- **Chat naturally** - Type messages and press Enter to send
- **Toggle theme** - Press `Ctrl+Shift+D` to toggle dark/light
- **Cancel agent** - Press `Esc` to cancel the current operation
- **Navigate history** - Use `Up/Down` arrows
- **Exit** - Press `Ctrl+D` twice to confirm

### Commands

- `/help` - Show help and keyboard shortcuts
- `/clear` - Clear the current session and start a new one

### Available Tools

vibecore comes with powerful built-in tools:

#### File Operations
```
- Read files and directories
- Write and edit files
- Multi-edit for batch file modifications
- Pattern matching with glob
```

#### Shell Commands
```
- Execute bash commands
- Search with grep
- List directory contents
- File system navigation
```

#### Python Execution
```
- Run Python code in isolated environments
- Persistent execution context
- Full standard library access
```

#### Task Management
```
- Create and manage todo lists
- Track task progress
- Organize complex workflows
```

### MCP (Model Context Protocol) Support

vibecore supports the [Model Context Protocol](https://modelcontextprotocol.io/), allowing you to connect to external tools and services through MCP servers.

#### Configuring MCP Servers

Create a `config.yaml` file in your project directory or add MCP servers to your environment:

```yaml
mcp_servers:
  # Filesystem server for enhanced file operations
  - name: filesystem
    type: stdio
    command: npx
    args: ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/directory"]
    
  # GitHub integration
  - name: github
    type: stdio
    command: npx
    args: ["-y", "@modelcontextprotocol/server-github"]
    env:
      GITHUB_PERSONAL_ACCESS_TOKEN: "your-github-token"
    
  # Custom HTTP server
  - name: my-server
    type: http
    url: "http://localhost:8080/mcp"
    allowed_tools: ["specific_tool"]  # Optional: whitelist specific tools
```

#### Available MCP Server Types

- **stdio**: Spawns a local process (npm packages, executables)
- **sse**: Server-Sent Events connection
- **http**: HTTP-based MCP servers
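An `sse` entry is not shown above; a sketch, assuming it follows the same schema as the `http` type with a `url` field (the server name and port are illustrative):

```yaml
mcp_servers:
  # Server reached over a Server-Sent Events endpoint
  - name: events-server
    type: sse
    url: "http://localhost:8080/sse"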

#### Tool Filtering

Control which tools are available from each server:

```yaml
mcp_servers:
  - name: restricted-server
    type: stdio
    command: some-command
    allowed_tools: ["safe_read", "safe_write"]  # Only these tools available
    blocked_tools: ["dangerous_delete"]         # These tools are blocked
```

## Development

### Setting Up Development Environment

```bash
# Clone and enter the repository
git clone https://github.com/serialx/vibecore.git
cd vibecore

# Install dependencies
uv sync

# Run tests
uv run pytest

# Run tests by category
uv run pytest tests/ui/        # UI and widget tests
uv run pytest tests/tools/     # Tool functionality tests
uv run pytest tests/session/   # Session tests

# Run linting and formatting
uv run ruff check .
uv run ruff format .

# Type checking
uv run pyright
```

### Project Structure

```
vibecore/
├── src/vibecore/
│   ├── main.py              # Application entry point & TUI orchestration
│   ├── context.py           # Central state management for agents
│   ├── settings.py          # Configuration with Pydantic
│   ├── agents/              # Agent configurations & handoffs
│   │   └── default.py       # Main agent with tool integrations
│   ├── models/              # LLM provider integrations
│   │   └── anthropic.py     # Claude model support via LiteLLM
│   ├── mcp/                 # Model Context Protocol integration
│   │   └── manager.py       # MCP server lifecycle management
│   ├── handlers/            # Stream processing handlers
│   │   └── stream_handler.py # Handle streaming agent responses
│   ├── session/             # Session management
│   │   ├── jsonl_session.py # JSONL-based conversation storage
│   │   └── loader.py        # Session loading logic
│   ├── widgets/             # Custom Textual UI components
│   │   ├── core.py          # Base widgets & layouts
│   │   ├── messages.py      # Message display components
│   │   ├── tool_message_factory.py  # Factory for creating tool messages
│   │   ├── core.tcss        # Core styling
│   │   └── messages.tcss    # Message-specific styles
│   ├── tools/               # Extensible tool system
│   │   ├── base.py          # Tool interfaces & protocols
│   │   ├── file/            # File manipulation tools
│   │   ├── shell/           # Shell command execution
│   │   ├── python/          # Python code interpreter
│   │   └── todo/            # Task management system
│   └── prompts/             # System prompts & instructions
├── tests/                   # Comprehensive test suite
│   ├── ui/                  # UI and widget tests
│   ├── tools/               # Tool functionality tests
│   ├── session/             # Session and storage tests
│   ├── cli/                 # CLI and command tests
│   ├── models/              # Model integration tests
│   └── _harness/            # Test utilities
├── pyproject.toml           # Project configuration & dependencies
├── uv.lock                  # Locked dependencies
└── CLAUDE.md                # AI assistant instructions
```

### Code Quality

We maintain high code quality standards:

- **Linting**: Ruff for fast, comprehensive linting
- **Formatting**: Ruff formatter for consistent code style
- **Type Checking**: Pyright for static type analysis
- **Testing**: Pytest for comprehensive test coverage

Run all checks:
```bash
uv run ruff check . && uv run ruff format --check . && uv run pyright . && uv run pytest
```

## Configuration

### Reasoning Effort

- Set default via env var: `VIBECORE_REASONING_EFFORT` (minimal | low | medium | high)
- Keyword triggers: `think` → low, `think hard` → medium, `ultrathink` → high
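For example, a shell session might pin the default effort and then rely on the keyword triggers per message:

```shell
# Pin the default reasoning effort for this shell session
export VIBECORE_REASONING_EFFORT=low
echo "default effort: $VIBECORE_REASONING_EFFORT"

# A message containing "think hard" would bump that turn to medium,
# and "ultrathink" to high, regardless of the default.
```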

### Environment Variables

```bash
# API keys
ANTHROPIC_API_KEY=sk-...   # For Claude models
OPENAI_API_KEY=sk-...      # For GPT models

# Default model (pick one)
VIBECORE_DEFAULT_MODEL=o3                                   # OpenAI
VIBECORE_DEFAULT_MODEL=gpt-4.1                              # OpenAI
VIBECORE_DEFAULT_MODEL=anthropic/claude-sonnet-4-20250514   # Claude
VIBECORE_DEFAULT_MODEL=litellm/deepseek/deepseek-chat       # Any LiteLLM-supported model
VIBECORE_DEFAULT_MODEL=qwen3-30b-a3b-mlx@8bit               # Local model (also set OPENAI_BASE_URL)
```
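Putting the local-model case together, a sketch of a session against a local OpenAI-compatible server (the URL, key placeholder, and port are assumptions, not vibecore defaults):

```shell
# Point vibecore at a local OpenAI-compatible endpoint
export OPENAI_BASE_URL="http://localhost:1234/v1"      # your local server
export OPENAI_API_KEY="local"                          # placeholder; many local servers ignore it
export VIBECORE_DEFAULT_MODEL="qwen3-30b-a3b-mlx@8bit"
vibecore
```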

## Contributing

We welcome contributions! Here's how to get started:

1. **Fork the repository** and create your branch from `main`
2. **Make your changes** and ensure all tests pass
3. **Add tests** for any new functionality
4. **Update documentation** as needed
5. **Submit a pull request** with a clear description

### Development Guidelines

- Follow the existing code style and patterns
- Write descriptive commit messages
- Add type hints to all functions
- Ensure your code passes all quality checks
- Update tests for any changes

### Reporting Issues

Found a bug or have a feature request? Please [open an issue](https://github.com/serialx/vibecore/issues) with:
- Clear description of the problem or feature
- Steps to reproduce (for bugs)
- Expected vs actual behavior
- Environment details (OS, Python version)

## Architecture

vibecore is built with a modular, extensible architecture:

- **Textual Framework**: Provides the responsive TUI foundation
- **OpenAI Agents SDK**: Powers the AI agent capabilities
- **Async Design**: Ensures smooth, non-blocking interactions
- **Tool System**: Modular tools with consistent interfaces
- **Context Management**: Maintains state across operations

## Recent Updates

- **Reasoning View**: New ReasoningMessage widget with live reasoning summaries during streaming
- **Context Usage Bar & CWD**: Footer shows token usage progress and current working directory
- **Keyboard & Commands**: Ctrl+Shift+D toggles theme, Esc cancels, Ctrl+D double-press to exit, `/help` and `/clear` commands
- **MCP Tool Output**: Improved rendering with Markdown and JSON prettification
- **MCP Support**: Full integration with Model Context Protocol for external tool connections
- **Print Mode**: `-p` flag to print response and exit for pipes/automation
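Print mode makes vibecore composable in pipelines; a usage sketch (the prompts and output file are illustrative):

```shell
# One-shot question: print the response to stdout and exit
vibecore -p "What does this repository do?"

# Compose with other tools: redirect or pipe the response
vibecore -p "Write a haiku about terminals" > haiku.txt
```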

## Roadmap

- [x] More custom tool views (Python, Read, Todo widgets)
- [x] Automation (vibecore -p "prompt")
- [x] MCP (Model Context Protocol) support
- [ ] Permission model
- [ ] Multi-agent system (agent-as-tools)
- [ ] Plugin system for custom tools
- [ ] Automated workflows

## License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## Acknowledgments

- Built with [Textual](https://textual.textualize.io/) - The amazing TUI framework
- Powered by [OpenAI Agents SDK](https://github.com/openai/openai-agents-python)
- Inspired by the growing ecosystem of terminal-based AI tools

---

<div align="center">

**Made with love by the vibecore community**

[Report Bug](https://github.com/serialx/vibecore/issues) · [Request Feature](https://github.com/serialx/vibecore/issues) · [Join Discussions](https://github.com/serialx/vibecore/discussions)

</div>
            
