openchatbi

- **Name**: openchatbi
- **Version**: 0.1.1 (PyPI)
- **Summary**: OpenChatBI - Natural language business intelligence powered by LLMs for intuitive data analysis and SQL generation
- **Upload time**: 2025-10-09 10:23:58
- **Requires Python**: <4.0,>=3.11
- **License**: MIT
- **Keywords**: agent, ai, analytics, analyze data, bi, business intelligence, conversational ai, data agent, database, gpt, langchain, langgraph, llm, machine learning, natural language, nlp, query data, talk to data, text2sql
# OpenChatBI

OpenChatBI is an open source, chat-based intelligent BI tool powered by large language models, designed to help users 
query, analyze, and visualize data through natural language conversations. Built on the LangGraph and LangChain ecosystem, 
it provides chat agents and workflows that support natural language to SQL conversion and streamlined data analysis.

<img src="https://github.com/zhongyu09/openchatbi/raw/main/example/demo.gif" alt="Demo" width="800">

## Core Features

1. **Natural Language Interaction**: Get data analysis results by asking questions in natural language
2. **Automatic SQL Generation**: Convert natural language queries into SQL statements using advanced text2sql workflows
   with schema linking and well-organized prompt engineering
3. **Data Visualization**: Generate intuitive data visualizations (via plotly)
4. **Data Catalog Management**: Automatically discovers and indexes database table structures, supports flexible catalog 
   storage backends, and makes it easy to maintain business explanations for tables and columns and to optimize prompts
5. **Knowledge Base Integration**: Answer complex questions by combining catalog-based knowledge retrieval with external
   knowledge base retrieval (via MCP tools)
6. **Code Execution**: Execute Python code for data analysis and visualization
7. **Interactive Problem-Solving**: Proactively ask users for more context when information is incomplete
8. **Persistent Memory**: Conversation management and user characteristic memory based on LangGraph checkpointing
9. **MCP Support**: Integration with MCP tools by configuration
10. **Web UI Interface**: Provides two sample UIs, a simple and a streaming web interface, built with Gradio and Streamlit
   and easy to integrate with other web applications

## Roadmap

1. **Time Series Forecasting**: Forecasting models deployed in-house
2. **Root Cause Analysis Algorithm**: Multi-dimensional drill-down capabilities for anomaly investigation

# Getting started

## Installation & Setup

### Prerequisites

- Python 3.11 or higher
- Access to a supported LLM provider (OpenAI, Anthropic, etc.)
- Data Warehouse (Database) credentials (like Presto, PostgreSQL, MySQL, etc.)
- Docker (optional, required only for `docker` executor mode)

### Installation

1. **Using uv (recommended):**

```bash
git clone git@github.com:zhongyu09/openchatbi
uv sync
```

2. **Using pip:**

```bash
pip install git+https://github.com/zhongyu09/openchatbi@main
```

3. **For development:**

```bash
git clone git@github.com:zhongyu09/openchatbi
uv sync --group dev
```

4. If you have issues when installing pysqlite3 on macOS, try to install sqlite using Homebrew first:

```bash
brew install sqlite
brew info sqlite
export LDFLAGS="-L/opt/homebrew/opt/sqlite/lib"
export CPPFLAGS="-I/opt/homebrew/opt/sqlite/include"
```

### Run Demo

Run the demo using an **example dataset** from the Spider dataset. You need to provide your OpenAI API key, or change the config to another LLM. (Note: on macOS, `sed -i` requires an explicit backup suffix, e.g. `sed -i ''`.)
```bash
cp example/config.yaml openchatbi/config.yaml
sed -i 's/YOUR_API_KEY_HERE/[YOUR OPENAI API KEY]/g' openchatbi/config.yaml
python run_streamlit_ui.py
```

### Configuration

1. **Create configuration file**

Copy the configuration template:
```bash
cp openchatbi/config.yaml.template openchatbi/config.yaml
```
Or create an empty YAML file.

2. **Configure your LLMs:**

```yaml
default_llm:
  class: langchain_openai.ChatOpenAI
  params:
    api_key: YOUR_API_KEY_HERE
    model: gpt-4.1
    temperature: 0.02
    max_tokens: 8192
embedding_model:
  class: langchain_openai.OpenAIEmbeddings
  params:
    api_key: YOUR_API_KEY_HERE
    model: text-embedding-3-large
    chunk_size: 1024
```
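Under the hood, a `class`/`params` pair like the one above can be resolved with a dynamic import. The following is a minimal sketch of that pattern, not OpenChatBI's actual loader; it is demonstrated with a stdlib class so the snippet runs without LangChain installed:

```python
import importlib


def build_from_config(spec: dict):
    """Instantiate the dotted class named in spec["class"] with spec["params"]."""
    module_name, _, class_name = spec["class"].rpartition(".")
    cls = getattr(importlib.import_module(module_name), class_name)
    return cls(**spec.get("params", {}))


# With the YAML above this would yield ChatOpenAI(api_key=..., model="gpt-4.1", ...).
# Demonstrated here with a stdlib class instead:
delta = build_from_config({"class": "datetime.timedelta", "params": {"days": 7}})
print(delta)  # 7 days, 0:00:00
```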

3. **Configure your data warehouse:**

```yaml
organization: Your Company
dialect: presto
data_warehouse_config:
  uri: "presto://user@host:8080/catalog/schema"
  include_tables:
    - your_table_name
  database_name: "catalog.schema"
```
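The `uri` follows the SQLAlchemy URL convention (`dialect://user@host:port/path`). As a quick sanity check of its parts using only the standard library:

```python
from urllib.parse import urlsplit

# Split the example connection string from the config above into its components.
uri = "presto://user@host:8080/catalog/schema"
parts = urlsplit(uri)
print(parts.scheme, parts.username, parts.hostname, parts.port, parts.path)
# presto user host 8080 /catalog/schema
```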

### Running the Application

1. **Invoking LangGraph:**

```bash
export CONFIG_FILE=YOUR_CONFIG_FILE_PATH
```

```python
from openchatbi import get_default_graph

graph = get_default_graph()
graph.invoke({"messages": [{"role": "user", "content": "Show me ctr trends for the past 7 days"}]},
    config={"configurable": {"thread_id": "1"}})
```

```sql
-- System-generated SQL
SELECT date, SUM(clicks)/SUM(impression) AS ctr
FROM ad_performance
WHERE date >= CURRENT_DATE - INTERVAL '7' DAY
GROUP BY date
ORDER BY date;
```
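The dict returned by `graph.invoke` carries the full message history, and the assistant's final answer is the last entry. A minimal sketch of extracting it, shown on a plain dict shaped like the result (in a real run the messages are LangChain message objects exposing a `.content` attribute):

```python
# A plain dict stands in for graph.invoke()'s state output so the snippet
# runs anywhere; the answer text here is invented for illustration.
result = {
    "messages": [
        {"role": "user", "content": "Show me ctr trends for the past 7 days"},
        {"role": "assistant", "content": "Here is the CTR trend for the past 7 days ..."},
    ]
}

final_answer = result["messages"][-1]["content"]
print(final_answer)
```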

2. **Sample Web UI:**

Run the Streamlit-based UI:
```bash
streamlit run sample_ui/streamlit_ui.py
```

Run the Gradio-based UI:
```bash
python sample_ui/streaming_ui.py
```

## Configuration Instructions

The configuration template is provided at `config.yaml.template`. Key configuration sections include:

### Basic Settings

- `organization`: Organization name (e.g., "Your Company")
- `dialect`: Database dialect (e.g., "presto")
- `bi_config_file`: Path to BI configuration file (e.g., "example/bi.yaml")

### Catalog Store Configuration

- `catalog_store`: Configuration for data catalog storage
    - `store_type`: Storage type (e.g., "file_system")
    - `data_path`: Path to catalog data stored by file system (e.g., "./example")

### Data Warehouse Configuration

- `data_warehouse_config`: Database connection settings
    - `uri`: Connection string for your database
    - `include_tables`: List of tables to include in catalog, leave empty to include all tables
    - `database_name`: Database name for catalog
    - `token_service`: Token service URL (for data warehouses that require token authentication, such as Presto)
    - `user_name` / `password`: Token service credentials

### LLM Configuration

A wide range of LLMs is supported via LangChain; see the [LangChain API reference](https://python.langchain.com/api_reference/reference.html#integrations) for the full list of integrations that support
`chat_models`. You can configure different LLMs for different tasks:

- `default_llm`: Primary language model for general tasks
- `embedding_model`: Model for embedding generation
- `text2sql_llm`: Specialized model for SQL generation

Commonly used LLM providers and their corresponding classes and installation commands:

- **Anthropic**: `langchain_anthropic.ChatAnthropic`, `pip install langchain-anthropic`
- **OpenAI**: `langchain_openai.ChatOpenAI`, `pip install langchain-openai`
- **Azure OpenAI**: `langchain_openai.AzureChatOpenAI`, `pip install langchain-openai`
- **Google Vertex AI**: `langchain_google_vertexai.ChatVertexAI`, `pip install langchain-google-vertexai`
- **Bedrock**: `langchain_aws.ChatBedrock`, `pip install langchain-aws`
- **Huggingface**: `langchain_huggingface.ChatHuggingFace`, `pip install langchain-huggingface`
- **Deepseek**: `langchain_deepseek.ChatDeepSeek`, `pip install langchain-deepseek`
- **Ollama**: `langchain_ollama.ChatOllama`, `pip install langchain-ollama`

### Advanced Configuration

OpenChatBI supports sophisticated customization through prompt engineering and catalog management features:

- **Prompt Engineering Configuration**: Customize system prompts, business glossaries, and data warehouse introductions
- **Data Catalog Management**: Configure table metadata, column descriptions, and SQL generation rules
- **Business Rules**: Define table selection criteria and domain-specific SQL constraints

For detailed configuration options and examples, see the [Advanced Features](#advanced-features) section.

## Architecture Overview

OpenChatBI is built using a modular architecture with clear separation of concerns:

1. **LangGraph Workflows**: Core orchestration using state machines for complex multi-step processes
2. **Catalog Management**: Flexible data catalog system supporting multiple storage backends
3. **Text2SQL Pipeline**: Advanced natural language to SQL conversion with schema linking
4. **Code Execution**: Sandboxed Python execution environment for data analysis
5. **Tool Integration**: Extensible tool system for human interaction and knowledge search
6. **Persistent Memory**: SQLite-based conversation state management

## Technology Stack

- **Frameworks**: LangGraph, LangChain, FastAPI, Gradio/Streamlit
- **Large Language Models**: Azure OpenAI (GPT-4), Anthropic Claude, OpenAI GPT models
- **Databases**: Presto, Trino, MySQL with SQLAlchemy support
- **Code Execution**: Local Python, RestrictedPython, Docker containerization
- **Development**: Python 3.11+, with modern tooling (Black, Ruff, MyPy, Pytest)
- **Storage**: SQLite for conversation checkpointing, file system catalog storage

## Project Structure

```
openchatbi/
├── README.md                    # Project documentation
├── pyproject.toml               # Modern Python project configuration
├── Dockerfile.python-executor  # Docker image for isolated code execution
├── run_tests.py                # Test runner script
├── run_streamlit_ui.py         # Streamlit UI launcher
├── openchatbi/                 # Core application code
│   ├── __init__.py             # Package initialization
│   ├── config.yaml.template    # Configuration template
│   ├── config_loader.py        # Configuration management
│   ├── constants.py            # Application constants
│   ├── agent_graph.py          # Main LangGraph workflow
│   ├── graph_state.py          # State definition for workflows
│   ├── utils.py                # Utility functions
│   ├── catalog/                # Data catalog management
│   │   ├── __init__.py         # Package initialization
│   │   ├── catalog_loader.py   # Catalog loading logic
│   │   ├── catalog_store.py    # Catalog storage interface
│   │   ├── entry.py            # Catalog entry points
│   │   ├── factory.py          # Catalog factory patterns
│   │   ├── helper.py           # Catalog helper functions
│   │   ├── schema_retrival.py  # Schema retrieval logic
│   │   └── token_service.py    # Token service integration
│   ├── code/                   # Code execution framework
│   │   ├── __init__.py         # Package initialization
│   │   ├── executor_base.py    # Base executor interface
│   │   ├── local_executor.py   # Local Python execution
│   │   ├── restricted_local_executor.py # RestrictedPython execution
│   │   └── docker_executor.py  # Docker-based isolated execution
│   ├── llm/                    # LLM integration layer
│   │   ├── __init__.py         # Package initialization
│   │   └── llm.py              # LLM management and retry logic
│   ├── prompts/                # Prompt templates and engineering
│   │   ├── __init__.py         # Package initialization
│   │   ├── agent_prompt.md     # Main agent prompts
│   │   ├── extraction_prompt.md # Information extraction prompts
│   │   ├── system_prompt.py    # System prompt management
│   │   ├── table_selection_prompt.md # Table selection prompts
│   │   └── text2sql_prompt.md  # Text-to-SQL prompts
│   ├── text2sql/               # Text-to-SQL conversion pipeline
│   │   ├── __init__.py         # Package initialization
│   │   ├── data.py             # Data and retriever for Text-to-SQL
│   │   ├── extraction.py       # Information extraction
│   │   ├── generate_sql.py     # SQL generation and execution logic
│   │   ├── schema_linking.py   # Schema linking process
│   │   ├── sql_graph.py        # SQL generation LangGraph workflow
│   │   ├── text2sql_utils.py   # Text2SQL utilities
│   │   └── visualization.py    # Data visualization functions
│   └── tool/                   # LangGraph tools and functions
│       ├── ask_human.py        # Human-in-the-loop interactions
│       ├── memory.py           # Memory management tool
│       ├── mcp_tools.py        # MCP (Model Context Protocol) integration
│       ├── run_python_code.py  # Configurable Python code execution
│       ├── save_report.py      # Report saving functionality
│       └── search_knowledge.py # Knowledge base search
├── sample_api/                 # API implementations
│   └── async_api.py            # Asynchronous FastAPI example
├── sample_ui/                  # Web interface implementations
│   ├── memory_ui.py            # Memory-enhanced UI interface
│   ├── plotly_utils.py         # Plotly utilities and helpers
│   ├── simple_ui.py            # Simple non-streaming Gradio UI
│   ├── streaming_ui.py         # Streaming Gradio UI with real-time updates
│   ├── streamlit_ui.py         # Streaming Streamlit UI with enhanced features
│   └── style.py                # UI styling and CSS
├── example/                    # Example configurations and data
│   ├── bi.yaml                 # BI configuration example
│   ├── config.yaml             # Application config example
│   ├── table_info.yaml         # Table information
│   ├── table_columns.csv       # Table column registry
│   ├── common_columns.csv      # Common column definitions
│   ├── sql_example.yaml        # SQL examples for retrieval
│   ├── table_selection_example.csv # Table selection examples
│   └── tracking_orders.sqlite  # Sample SQLite database
├── tests/                      # Test suite
│   ├── __init__.py             # Package initialization
│   ├── conftest.py             # Test configuration
│   ├── test_*.py               # Test modules for various components
│   └── README.md               # Testing documentation
├── docs/                       # Documentation
│   ├── source/                 # Sphinx documentation source
│   ├── build/                  # Built documentation
│   ├── Makefile                # Documentation build scripts
│   └── make.bat                # Windows build script
└── .github/                    # GitHub workflows and templates
    └── workflows/              # CI/CD workflows
```

## Advanced Features

### Visualization configuration
You can choose rule-based or LLM-based visualization, or disable visualization entirely.
```yaml
# Options: "rule" (rule-based), "llm" (LLM-based), or null (skip visualization)
visualization_mode: llm
```

### Prompt Engineering
#### Basic Knowledge & Glossary

You can define basic knowledge and glossary in `example/bi.yaml`, for example:

```yaml
basic_knowledge_glossary: |
  # Basic Knowledge Introduction
    The basic knowledge about your company and its business, including key concepts, metrics, and processes.
  # Glossary
    Common terms and their definitions used in your business context.
```

#### Data Warehouse Introduction

You can provide a brief introduction of your data warehouse in `example/bi.yaml`, for example:

```yaml
data_warehouse_introduction: |
  # Data Warehouse Introduction
    This data warehouse is built on Presto and contains various tables related to XXXXX.
    The main fact tables include XXXX metrics, while dimension tables include XXXXX.
    The data is updated hourly and is used for reporting and analysis purposes.
```

#### Table Selection Rules

You can configure table selection rules in `example/bi.yaml`, for example:

```yaml
table_selection_extra_rule: |
  - All tables with `is_valid` can support both valid and invalid traffic
```

#### Custom SQL Rules

You can define additional SQL generation rules for tables in `example/table_info.yaml`, for example:

```yaml
sql_rule: |
  ### SQL Rules
  - All event_date in the table are stored in **UTC**. If the user specifies a timezone (e.g., CET, PST), convert between timezones accordingly.

```


### Catalog Management

#### Introduction

High-quality catalog data is essential for accurate Text2SQL generation and data analysis. OpenChatBI automatically 
discovers and indexes data warehouse table structures while providing flexible management for business metadata, column 
descriptions, and query optimization rules.

#### Catalog Structure

The catalog system organizes metadata in a hierarchical structure:

**Database Level**
- Top-level container for all tables and schemas

**Table Level**
- `description`: Business functionality and purpose of the table
- `selection_rule`: Guidelines for when and how to use this table in queries
- `sql_rule`: Specific SQL generation rules and constraints for this table

**Column Level**
- **Required Fields**: Essential metadata for each column to enable effective Text2SQL generation
  - `column_name`: Technical database column name
  - `display_name`: Human-readable name for business users
  - `alias`: Alternative names or abbreviations
  - `type`: Data type (string, integer, date, etc.)
  - `category`: Business category, dimension or metric
  - `tag`: Additional labels for filtering and organization
  - `description`: Detailed explanation of column purpose and usage
- **Two Types** of Columns
  - **Common Columns**: Columns with standardized business meanings shared across tables
  - **Table-Specific Columns**: Columns with context-dependent meanings that vary between tables
- **Derived Metrics**: Virtual metrics calculated from existing columns using SQL formulas
  - Computed dynamically during query execution rather than stored as physical columns
  - Examples: CTR (clicks/impressions), conversion rates, profit margins
  - Enable complex business calculations without pre-computing values
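As a sketch of how a derived metric might be expanded at query time, splicing its formula into the SELECT list rather than reading a stored column (the helper below is hypothetical, not OpenChatBI's actual implementation):

```python
# Hypothetical mapping of derived-metric names to SQL formulas.
DERIVED_METRICS = {
    "ctr": "SUM(clicks) / SUM(impressions)",
    "cvr": "SUM(conversions) / SUM(clicks)",
}


def select_clause(dimensions: list[str], metrics: list[str]) -> str:
    """Build a SELECT list where derived metrics expand to their formulas."""
    exprs = dimensions + [f"{DERIVED_METRICS[m]} AS {m}" for m in metrics]
    return "SELECT " + ", ".join(exprs)


print(select_clause(["date"], ["ctr"]))
# SELECT date, SUM(clicks) / SUM(impressions) AS ctr
```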
  
#### Loading Catalog from Database

OpenChatBI can automatically discover and load table structures from your data warehouse:

1. **Automatic Discovery**: Connects to your configured data warehouse and scans table schemas
2. **Metadata Extraction**: Extracts column names, data types, and basic structural information
3. **Incremental Updates**: Supports updating catalog data as your database schema evolves

Configure automatic catalog loading in your `config.yaml`:

```yaml
catalog_store:
  store_type: file_system
  data_path: ./catalog_data
data_warehouse_config:
  include_tables:
    - your_table_pattern
  # Leave empty to include all accessible tables
```

#### File System Catalog Store

The file system catalog store organizes metadata across multiple files for maintainability and version control:

**Core Table Information**
- `table_info.yaml`: Comprehensive table metadata organized hierarchically (database → table → information)
  - `type`: Table classification (e.g., "fact" for Fact Tables, "dimension" for Dimension Tables)
  - `description`: Business functionality and purpose
  - `selection_rule`: Usage guidelines in markdown list format (each line starts with `-`)
  - `sql_rule`: SQL generation rules in markdown header format (each rule starts with `####`)
  - `derived_metric`: Virtual metrics with calculation formulas, organized by groups:
    ```md
    #### Derived Ratio Metrics
    Click-through Rate (alias CTR): SUM(clicks) / SUM(impression)
    Conversion Rate (alias CVR): SUM(conversions) / SUM(clicks)
    ```

**Column Management**
- `table_columns.csv`: Basic column registry with schema `db_name,table_name,column_name`
- `table_spec_columns.csv`: Table-specific column metadata with full schema:
  `db_name,table_name,column_name,display_name,alias,type,category,tag,description`
- `common_columns.csv`: Shared column definitions across tables with schema:
  `column_name,display_name,alias,type,category,tag,description`
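Since these are plain CSV files, the schemas above can be loaded with the standard library. A minimal sketch using an inline sample row (the column values are invented for illustration):

```python
import csv
import io

# Inline stand-in for common_columns.csv, following the schema above.
SAMPLE = """column_name,display_name,alias,type,category,tag,description
clicks,Clicks,clk,integer,metric,ad,Number of ad clicks
"""

rows = list(csv.DictReader(io.StringIO(SAMPLE)))
print(rows[0]["display_name"])  # Clicks
```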

**Query Examples and Training Data**
- `table_selection_example.csv`: Table selection training examples with schema `question,selected_tables`
- `sql_example.yaml`: Query examples organized by database and table structure:
  ```yaml
  your_database:
    ad_performance: |
      Q: Show me CTR trends for the past 7 days
      A: SELECT date, SUM(clicks)/SUM(impressions) AS ctr
         FROM ad_performance
         WHERE date >= CURRENT_DATE - INTERVAL 7 DAY
         GROUP BY date
         ORDER BY date;
  ```


### Python Code Execution Configuration

OpenChatBI supports multiple execution environments for running Python code with different security and performance characteristics:

```yaml
# Python Code Execution Configuration
python_executor: local  # Options: "local", "restricted_local", "docker"
```

#### Executor Types

- **`local`** (Default)
  - **Performance**: Fastest execution
  - **Security**: Least secure (code runs in current Python process)
  - **Capabilities**: Full Python capabilities and library access
  - **Use Case**: Development environments, trusted code execution

- **`restricted_local`**
  - **Performance**: Moderate execution speed
  - **Security**: Moderate security with RestrictedPython sandboxing
  - **Capabilities**: Limited Python features (no imports, file access, etc.)
  - **Use Case**: Semi-trusted environments with controlled execution

- **`docker`**
  - **Performance**: Slower due to container overhead
  - **Security**: Highest security with complete process isolation
  - **Capabilities**: Full Python capabilities within isolated container
  - **Use Case**: Production environments, untrusted code execution
  - **Requirements**: Docker must be installed and running
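The three modes can be thought of as implementations of a common executor interface chosen by the `python_executor` setting. A minimal sketch of that pattern follows; the class and function names are illustrative, not OpenChatBI's actual `executor_base` API:

```python
from abc import ABC, abstractmethod


class ExecutorBase(ABC):
    """Common interface each execution backend implements."""

    @abstractmethod
    def run(self, code: str) -> dict: ...


class LocalExecutor(ExecutorBase):
    """Fast but unsandboxed: runs code in the current process (trusted code only)."""

    def run(self, code: str) -> dict:
        scope: dict = {}
        exec(code, scope)
        # Return user-defined names, dropping injected dunders like __builtins__.
        return {k: v for k, v in scope.items() if not k.startswith("__")}


def get_executor(mode: str) -> ExecutorBase:
    # restricted_local and docker backends would be dispatched here as well.
    if mode == "local":
        return LocalExecutor()
    raise ValueError(f"unsupported python_executor: {mode}")


result = get_executor("local").run("answer = 6 * 7")
print(result)  # {'answer': 42}
```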

#### Docker Executor Setup

For production deployments or when running untrusted code, the Docker executor provides complete isolation:

1. **Install Docker**: Download and install Docker Desktop or Docker Engine
2. **Configure executor**: Set `python_executor: docker` in your config
3. **Automatic setup**: OpenChatBI will automatically build the required Docker image
4. **Fallback behavior**: If Docker is unavailable, automatically falls back to local executor

**Docker Executor Features**:
- Pre-installed data science libraries (pandas, numpy, matplotlib, seaborn)
- Network isolation for security
- Automatic container cleanup
- Resource isolation from host system


## Development & Testing

### Code Quality Tools

The project uses modern Python tooling for code quality:

```bash
# Format code
uv run black .

# Lint code  
uv run ruff check .

# Type checking
uv run mypy openchatbi/

# Security scanning
uv run bandit -r openchatbi/
```

### Testing

Run the test suite:

```bash
# Run all tests
uv run pytest

# Run with coverage
uv run pytest --cov=openchatbi --cov-report=html

# Run specific test files
uv run pytest tests/test_generate_sql.py
uv run pytest tests/test_agent_graph.py
```

### Pre-commit Hooks

Install pre-commit hooks for automatic code quality checks:

```bash
uv run pre-commit install
```

## Contribution Guidelines

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/fooBar`)
3. Commit your changes (`git commit -am 'Add some fooBar'`)
4. Push to the branch (`git push origin feature/fooBar`)
5. Create a new Pull Request

## License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## Contact & Support

- **Author**: Yu Zhong ([zhongyu8@gmail.com](mailto:zhongyu8@gmail.com))
- **Repository**: [github.com/zhongyu09/openchatbi](https://github.com/zhongyu09/openchatbi)
- **Issues**: [Report bugs and feature requests](https://github.com/zhongyu09/openchatbi/issues)
            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "openchatbi",
    "maintainer": null,
    "docs_url": null,
    "requires_python": "<4.0,>=3.11",
    "maintainer_email": null,
    "keywords": "agent, ai, analytics, analyze data, bi, business intelligence, conversational ai, data agent, database, gpt, langchain, langgraph, llm, machine learning, natural language, nlp, query data, talk to data, text2sql",
    "author": null,
    "author_email": "Yu Zhong <zhongyu8@gmail.com>",
    "download_url": "https://files.pythonhosted.org/packages/77/e5/f7272bcb78a288aca158052c5cb60906916ea0732b8f8e37e8f6def0d964/openchatbi-0.1.1.tar.gz",
    "platform": null,
    "description": "# OpenChatBI\n\nOpenChatBI is an open source, chat-based intelligent BI tool powered by large language models, designed to help users \nquery, analyze, and visualize data through natural language conversations. Built on LangGraph and LangChain ecosystem, \nit provides chat agents and workflows that support natural language to SQL conversion and streamlined data analysis.\n\n<img src=\"https://github.com/zhongyu09/openchatbi/raw/main/example/demo.gif\" alt=\"Demo\" width=\"800\">\n\n## Core Features\n\n1. **Natural Language Interaction**: Get data analysis results by asking questions in natural language\n2. **Automatic SQL Generation**: Convert natural language queries into SQL statements using advanced text2sql workflows\n   with schema linking and well organized prompt engineering\n3. **Data Visualization**: Generate intuitive data visualizations (via plotly)\n4. **Data Catalog Management**: Automatically discovers and indexes database table structures, supports flexible catalog \n   storage backends, and easily maintains business explanations for tables and columns as well as optimizes Prompts.\n5. **Knowledge Base Integration**: Answer complex questions by combining catalog based knowledge retrival and external\n   knowledge base retrival (via MCP tools)\n6. **Code Execution**: Execute Python code for data analysis and visualization\n7. **Interactive Problem-Solving**: Proactively ask users for more context when information is incomplete\n8. **Persistent Memory**: Conversation management and user characteristic memory based on LangGraph checkpointing\n9. **MCP Support**: Integration with MCP tools by configuration\n10. **Web UI Interface**: Provide 2 sample UI: simple and streaming web interfaces using Gradio and Streamlit, easy to\n   integrate with other web applications\n\n## Roadmap\n\n1. **Time Series Forecasting**: Forecasting models deployed in-house\n2. 
**Root Cause Analysis Algorithm**: Multi-dimensional drill-down capabilities for anomaly investigation\n\n# Getting started\n\n## Installation & Setup\n\n### Prerequisites\n\n- Python 3.11 or higher\n- Access to a supported LLM provider (OpenAI, Anthropic, etc.)\n- Data Warehouse (Database) credentials (like Presto, PostgreSQL, MySQL, etc.)\n- Docker (optional, required only for `docker` executor mode)\n\n### Installation\n\n1. **Using uv (recommended):**\n\n```bash\ngit clone git@github.com:zhongyu09/openchatbi\nuv sync\n```\n\n2. **Using pip:**\n\n```bash\npip install git+https://github.com/zhongyu09/openchatbi@main\n```\n\n3. **For development:**\n\n```bash\ngit clone git@github.com:zhongyu09/openchatbi\nuv sync --group dev\n```\n\n4. If you have issues when installing pysqlite3 on macOS, try to install sqlite using Homebrew first:\n\n```bash\nbrew install sqlite\nbrew info sqlite\nexport LDFLAGS=\"-L/opt/homebrew/opt/sqlite/lib\"\nexport CPPFLAGS=\"-I/opt/homebrew/opt/sqlite/include\"\n```\n\n### Run Demo\n\nRun demo using **example dataset** from spider dataset, you need to provide \"YOUR OPENAI API KEY\" or change config to other LLM.\n```bash\ncp example/config.yaml openchatbi/config.yaml\nsed -i 's/YOUR_API_KEY_HERE/[YOUR OPENAI API KEY]/g' openchatbi/config.yaml\npython run_streamlit_ui.py\n```\n\n### Configuration\n\n1. **Create configuration file**\n\nCopy the configuration template:\n```bash\ncp openchatbi/config.yaml.template openchatbi/config.yaml\n```\nOr create an empty YAML file.\n\n2. **Configure your LLMs:**\n\n```yaml\ndefault_llm:\n  class: langchain_openai.ChatOpenAI\n  params:\n    api_key: YOUR_API_KEY_HERE\n    model: gpt-4.1\n    temperature: 0.02\n    max_tokens: 8192\nembedding_model:\n  class: langchain_openai.OpenAIEmbeddings\n  params:\n    api_key: YOUR_API_KEY_HERE\n    model: text-embedding-3-large\n    chunk_size: 1024\n```\n\n3. 
**Configure your data warehouse:**\n\n```yaml\norganization: Your Company\ndialect: presto\ndata_warehouse_config:\n  uri: \"presto://user@host:8080/catalog/schema\"\n  include_tables:\n    - your_table_name\n  database_name: \"catalog.schema\"\n```\n\n### Running the Application\n\n1. **Invoking LangGraph:**\n\n```bash\nexport CONFIG_FILE=YOUR_CONFIG_FILE_PATH\n```\n\n```python\nfrom openchatbi import get_default_graph\n\ngraph = get_default_graph()\ngraph.invoke({\"messages\": [{\"role\": \"user\", \"content\": \"Show me ctr trends for the past 7 days\"}]},\n    config={\"configurable\": {\"thread_id\": \"1\"}})\n```\n\n```\n# System-generated SQL\nSELECT date, SUM(clicks)/SUM(impression) AS ctr\nFROM ad_performance\nWHERE date >= CURRENT_DATE - 7 DAYS\nGROUP BY date\nORDER BY date;\n```\n\n2. **Sample Web UI:**\n\nStreamlit based UI:\n```bash\nstreamlit run sample_ui streamlit_ui.py\n```\n\nRun Gradio based UI:\n```bash\npython sample_ui/streaming_ui.py\n```\n\n## Configuration Instructions\n\nThe configuration template is provided at `config.yaml.template`. 
Key configuration sections include:\n\n### Basic Settings\n\n- `organization`: Organization name (e.g., \"Your Company\")\n- `dialect`: Database dialect (e.g., \"presto\")\n- `bi_config_file`: Path to BI configuration file (e.g., \"example/bi.yaml\")\n\n### Catalog Store Configuration\n\n- `catalog_store`: Configuration for data catalog storage\n    - `store_type`: Storage type (e.g., \"file_system\")\n    - `data_path`: Path to catalog data stored by file system (e.g., \"./example\")\n\n### Data Warehouse Configuration\n\n- `data_warehouse_config`: Database connection settings\n    - `uri`: Connection string for your database\n    - `include_tables`: List of tables to include in catalog, leave empty to include all tables\n    - `database_name`: Database name for catalog\n    - `token_service`: Token service URL (for data warehouse that need token authentication like Presto)\n    - `user_name` / `password`: Token service credentials\n\n### LLM Configuration\n\nVarious LLMs are supported based on LangChain, see LangChain API\nDocument(https://python.langchain.com/api_reference/reference.html#integrations) for full list that support\n`chat_models`. 
You can configure different LLMs for different tasks:

- `default_llm`: Primary language model for general tasks
- `embedding_model`: Model for embedding generation
- `text2sql_llm`: Specialized model for SQL generation

Commonly used LLM providers, their chat model classes, and installation commands:

- **Anthropic**: `langchain_anthropic.ChatAnthropic`, `pip install langchain-anthropic`
- **OpenAI**: `langchain_openai.ChatOpenAI`, `pip install langchain-openai`
- **Azure OpenAI**: `langchain_openai.AzureChatOpenAI`, `pip install langchain-openai`
- **Google Vertex AI**: `langchain_google_vertexai.ChatVertexAI`, `pip install langchain-google-vertexai`
- **Bedrock**: `langchain_aws.ChatBedrock`, `pip install langchain-aws`
- **Huggingface**: `langchain_huggingface.ChatHuggingFace`, `pip install langchain-huggingface`
- **Deepseek**: `langchain_deepseek.ChatDeepSeek`, `pip install langchain-deepseek`
- **Ollama**: `langchain_ollama.ChatOllama`, `pip install langchain-ollama`

### Advanced Configuration

OpenChatBI supports sophisticated customization through prompt engineering and catalog management features:

- **Prompt Engineering Configuration**: Customize system prompts, business glossaries, and data warehouse introductions
- **Data Catalog Management**: Configure table metadata, column descriptions, and SQL generation rules
- **Business Rules**: Define table selection criteria and domain-specific SQL constraints

For detailed configuration options and examples, see the [Advanced Features](#advanced-features) section.

## Architecture Overview

OpenChatBI uses a modular architecture with clear separation of concerns:

1. **LangGraph Workflows**: Core orchestration using state machines for complex multi-step processes
2. **Catalog Management**: Flexible data catalog system supporting multiple storage backends
3. **Text2SQL Pipeline**: Advanced natural language to SQL conversion with schema linking
4. **Code Execution**: Sandboxed Python execution environment for data analysis
5. **Tool Integration**: Extensible tool system for human interaction and knowledge search
6. **Persistent Memory**: SQLite-based conversation state management

## Technology Stack

- **Frameworks**: LangGraph, LangChain, FastAPI, Gradio/Streamlit
- **Large Language Models**: Azure OpenAI (GPT-4), Anthropic Claude, OpenAI GPT models
- **Databases**: Presto, Trino, MySQL with SQLAlchemy support
- **Code Execution**: Local Python, RestrictedPython, Docker containerization
- **Development**: Python 3.11+, with modern tooling (Black, Ruff, MyPy, Pytest)
- **Storage**: SQLite for conversation checkpointing, file system catalog storage

## Project Structure

```
openchatbi/
├── README.md                    # Project documentation
├── pyproject.toml               # Modern Python project configuration
├── Dockerfile.python-executor   # Docker image for isolated code execution
├── run_tests.py                 # Test runner script
├── run_streamlit_ui.py          # Streamlit UI launcher
├── openchatbi/                  # Core application code
│   ├── __init__.py              # Package initialization
│   ├── config.yaml.template     # Configuration template
│   ├── config_loader.py         # Configuration management
│   ├── constants.py             # Application constants
│   ├── agent_graph.py           # Main LangGraph workflow
│   ├── graph_state.py           # State definition for workflows
│   ├── utils.py                 # Utility functions
│   ├── catalog/                 # Data catalog management
│   │   ├── __init__.py          # Package initialization
│   │   ├── catalog_loader.py    # Catalog loading logic
│   │   ├── catalog_store.py     # Catalog storage interface
│   │   ├── entry.py             # Catalog entry points
│   │   ├── factory.py           # Catalog factory patterns
│   │   ├── helper.py            # Catalog helper functions
│   │   ├── schema_retrival.py   # Schema retrieval logic
│   │   └── token_service.py     # Token service integration
│   ├── code/                    # Code execution framework
│   │   ├── __init__.py          # Package initialization
│   │   ├── executor_base.py     # Base executor interface
│   │   ├── local_executor.py    # Local Python execution
│   │   ├── restricted_local_executor.py # RestrictedPython execution
│   │   └── docker_executor.py   # Docker-based isolated execution
│   ├── llm/                     # LLM integration layer
│   │   ├── __init__.py          # Package initialization
│   │   └── llm.py               # LLM management and retry logic
│   ├── prompts/                 # Prompt templates and engineering
│   │   ├── __init__.py          # Package initialization
│   │   ├── agent_prompt.md      # Main agent prompts
│   │   ├── extraction_prompt.md # Information extraction prompts
│   │   ├── system_prompt.py     # System prompt management
│   │   ├── table_selection_prompt.md # Table selection prompts
│   │   └── text2sql_prompt.md   # Text-to-SQL prompts
│   ├── text2sql/                # Text-to-SQL conversion pipeline
│   │   ├── __init__.py          # Package initialization
│   │   ├── data.py              # Data and retriever for Text-to-SQL
│   │   ├── extraction.py        # Information extraction
│   │   ├── generate_sql.py      # SQL generation and execution logic
│   │   ├── schema_linking.py    # Schema linking process
│   │   ├── sql_graph.py         # SQL generation LangGraph workflow
│   │   ├── text2sql_utils.py    # Text2SQL utilities
│   │   └── visualization.py     # Data visualization functions
│   └── tool/                    # LangGraph tools and functions
│       ├── ask_human.py         # Human-in-the-loop interactions
│       ├── memory.py            # Memory management tool
│       ├── mcp_tools.py         # MCP (Model Context Protocol) integration
│       ├── run_python_code.py   # Configurable Python code execution
│       ├── save_report.py       # Report saving functionality
│       └── search_knowledge.py  # Knowledge base search
├── sample_api/                  # API implementations
│   └── async_api.py             # Asynchronous FastAPI example
├── sample_ui/                   # Web interface implementations
│   ├── memory_ui.py             # Memory-enhanced UI interface
│   ├── plotly_utils.py          # Plotly utilities and helpers
│   ├── simple_ui.py             # Simple non-streaming Gradio UI
│   ├── streaming_ui.py          # Streaming Gradio UI with real-time updates
│   ├── streamlit_ui.py          # Streaming Streamlit UI with enhanced features
│   └── style.py                 # UI styling and CSS
├── example/                     # Example configurations and data
│   ├── bi.yaml                  # BI configuration example
│   ├── config.yaml              # Application config example
│   ├── table_info.yaml          # Table information
│   ├── table_columns.csv        # Table column registry
│   ├── common_columns.csv       # Common column definitions
│   ├── sql_example.yaml         # SQL examples for retrieval
│   ├── table_selection_example.csv # Table selection examples
│   └── tracking_orders.sqlite   # Sample SQLite database
├── tests/                       # Test suite
│   ├── __init__.py              # Package initialization
│   ├── conftest.py              # Test configuration
│   ├── test_*.py                # Test modules for various components
│   └── README.md                # Testing documentation
├── docs/                        # Documentation
│   ├── source/                  # Sphinx documentation source
│   ├── build/                   # Built documentation
│   ├── Makefile                 # Documentation build scripts
│   └── make.bat                 # Windows build script
└── .github/                     # GitHub workflows and templates
    └── workflows/               # CI/CD workflows
```

## Advanced Features

### Visualization Configuration

You can choose rule-based or LLM-based visualization, or disable visualization entirely:

```yaml
# Options: "rule" (rule-based), "llm" (LLM-based), or null (skip visualization)
visualization_mode: llm
```

### Prompt Engineering

#### Basic Knowledge & Glossary

You can define basic knowledge and a glossary in `example/bi.yaml`, for example:

```yaml
basic_knowledge_glossary: |
  # Basic Knowledge Introduction
    The basic knowledge about your company and its business, including key concepts, metrics, and processes.
  # Glossary
    Common terms and their definitions used in your business context.
```

#### Data Warehouse Introduction

You can provide a brief introduction to your data warehouse in `example/bi.yaml`, for example:

```yaml
data_warehouse_introduction: |
  # Data Warehouse Introduction
    This data warehouse is built on Presto and contains various tables related to XXXXX.
    The main fact tables include XXXX metrics, while dimension tables include XXXXX.
    The data is updated hourly and is used for reporting and analysis purposes.
```

#### Table Selection Rules

You can configure table selection rules in `example/bi.yaml`, for example:

```yaml
table_selection_extra_rule: |
  - All tables with is_valid can support both valid and invalid traffic
```

#### Custom SQL Rules

You can define additional SQL generation rules for tables in `example/table_info.yaml`, for example:

```yaml
sql_rule: |
  ### SQL Rules
  - All event_date values in the table are stored in **UTC**. If the user specifies a timezone (e.g., CET, PST), convert between timezones accordingly.
```

### Catalog Management

#### Introduction

High-quality catalog data is essential for accurate Text2SQL generation and data analysis.
OpenChatBI automatically discovers and indexes data warehouse table structures while providing flexible management of business metadata, column descriptions, and query optimization rules.

#### Catalog Structure

The catalog system organizes metadata in a hierarchical structure:

**Database Level**

- Top-level container for all tables and schemas

**Table Level**

- `description`: Business functionality and purpose of the table
- `selection_rule`: Guidelines for when and how to use this table in queries
- `sql_rule`: Specific SQL generation rules and constraints for this table

**Column Level**

- **Required Fields**: Essential metadata for each column to enable effective Text2SQL generation
  - `column_name`: Technical database column name
  - `display_name`: Human-readable name for business users
  - `alias`: Alternative names or abbreviations
  - `type`: Data type (string, integer, date, etc.)
  - `category`: Business category: dimension or metric
  - `tag`: Additional labels for filtering and organization
  - `description`: Detailed explanation of the column's purpose and usage
- **Two Types of Columns**
  - **Common Columns**: Columns with standardized business meanings shared across tables
  - **Table-Specific Columns**: Columns with context-dependent meanings that vary between tables
- **Derived Metrics**: Virtual metrics calculated from existing columns using SQL formulas
  - Computed dynamically during query execution rather than stored as physical columns
  - Examples: CTR (clicks/impressions), conversion rates, profit margins
  - Enable complex business calculations without pre-computing values

#### Loading Catalog from Database

OpenChatBI can automatically discover and load table structures from your data warehouse:

1. **Automatic Discovery**: Connects to your configured data warehouse and scans table schemas
2. **Metadata Extraction**: Extracts column names, data types, and basic structural information
3. **Incremental Updates**: Supports updating catalog data as your database schema evolves

Configure automatic catalog loading in your `config.yaml`:

```yaml
catalog_store:
  store_type: file_system
  data_path: ./catalog_data
data_warehouse_config:
  include_tables:
    - your_table_pattern
  # Leave empty to include all accessible tables
```

#### File System Catalog Store

The file system catalog store organizes metadata across multiple files for maintainability and version control:

**Core Table Information**

- `table_info.yaml`: Comprehensive table metadata organized hierarchically (database → table → information)
  - `type`: Table classification (e.g., "fact" for fact tables, "dimension" for dimension tables)
  - `description`: Business functionality and purpose
  - `selection_rule`: Usage guidelines in markdown list format (each line starts with `-`)
  - `sql_rule`: SQL generation rules in markdown header format (each rule starts with `####`)
  - `derived_metric`: Virtual metrics with calculation formulas, organized by groups:
    ```md
    #### Derived Ratio Metrics
    Click-through Rate (alias CTR): SUM(clicks) / SUM(impression)
    Conversion Rate (alias CVR): SUM(conversions) / SUM(clicks)
    ```

**Column Management**

- `table_columns.csv`: Basic column registry with schema `db_name,table_name,column_name`
- `table_spec_columns.csv`: Table-specific column metadata with full schema:
  `db_name,table_name,column_name,display_name,alias,type,category,tag,description`
- `common_columns.csv`: Shared column definitions across tables with schema:
  `column_name,display_name,alias,type,category,tag,description`

**Query Examples and Training Data**

- `table_selection_example.csv`: Table selection training examples with schema `question,selected_tables`
- `sql_example.yaml`: Query examples organized by database and table structure:
  ```yaml
  your_database:
    ad_performance: |
      Q: Show me CTR trends for the past 7 days
      A: SELECT date, SUM(clicks)/SUM(impressions) AS ctr
         FROM ad_performance
         WHERE date >= CURRENT_DATE - INTERVAL 7 DAY
         GROUP BY date
         ORDER BY date;
  ```

### Python Code Execution Configuration

OpenChatBI supports multiple execution environments for running Python code, with different security and performance characteristics:

```yaml
# Python Code Execution Configuration
python_executor: local  # Options: "local", "restricted_local", "docker"
```

#### Executor Types

- **`local`** (default)
  - **Performance**: Fastest execution
  - **Security**: Least secure (code runs in the current Python process)
  - **Capabilities**: Full Python capabilities and library access
  - **Use Case**: Development environments, trusted code execution

- **`restricted_local`**
  - **Performance**: Moderate execution speed
  - **Security**: Moderate security with RestrictedPython sandboxing
  - **Capabilities**: Limited Python features (no imports, file access, etc.)
  - **Use Case**: Semi-trusted environments with controlled execution

- **`docker`**
  - **Performance**: Slower due to container overhead
  - **Security**: Highest security with complete process isolation
  - **Capabilities**: Full Python capabilities within an isolated container
  - **Use Case**: Production environments, untrusted code execution
  - **Requirements**: Docker must be installed and running

#### Docker Executor Setup

For production deployments, or when running untrusted code, the Docker executor provides complete isolation:

1. **Install Docker**: Download and install Docker Desktop or Docker Engine
2. **Configure executor**: Set `python_executor: docker` in your config
3. **Automatic setup**: OpenChatBI will automatically build the required Docker image
4. **Fallback behavior**: If Docker is unavailable, OpenChatBI automatically falls back to the local executor

**Docker Executor Features**:

- Pre-installed data science libraries (pandas, numpy, matplotlib, seaborn)
- Network isolation for security
- Automatic container cleanup
- Resource isolation from the host system

## Development & Testing

### Code Quality Tools

The project uses modern Python tooling for code quality:

```bash
# Format code
uv run black .

# Lint code
uv run ruff check .

# Type checking
uv run mypy openchatbi/

# Security scanning
uv run bandit -r openchatbi/
```

### Testing

Run the test suite:

```bash
# Run all tests
uv run pytest

# Run with coverage
uv run pytest --cov=openchatbi --cov-report=html

# Run specific test files
uv run pytest tests/test_generate_sql.py
uv run pytest tests/test_agent_graph.py
```

### Pre-commit Hooks

Install pre-commit hooks for automatic code quality checks:

```bash
uv run pre-commit install
```

## Contribution Guidelines

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/fooBar`)
3. Commit your changes (`git commit -am 'Add some fooBar'`)
4. Push to the branch (`git push origin feature/fooBar`)
5. Create a new Pull Request

## License

This project is licensed under the MIT License; see the [LICENSE](LICENSE) file for details.

## Contact & Support

- **Author**: Yu Zhong ([zhongyu8@gmail.com](mailto:zhongyu8@gmail.com))
- **Repository**: [github.com/zhongyu09/openchatbi](https://github.com/zhongyu09/openchatbi)
- **Issues**: [Report bugs and feature requests](https://github.com/zhongyu09/openchatbi/issues)