<div align="center">
<h1 style="font-size: 4em; margin-bottom: 0;">d<span style="color: #FF8C00;">oc</span>bt</h1>
<h3 style="margin-top: 0;">Documentation Build Tool</h3>
[CI](https://github.com/aleenprd/docbt/actions/workflows/ci.yml)
[PyPI](https://pypi.org/project/docbt/)
[Python 3.10+](https://www.python.org/downloads/)

[License](LICENSE)






> Generate YAML documentation for DBT models with optional AI assistance. Built with Streamlit for an intuitive and familiar web interface.
</div>
## 📖 Why **d<span style="color: #FF8C00;">oc</span>bt**
**d<span style="color: #FF8C00;">oc</span>bt** (Doc Build Tool) is a utility designed to streamline [dbt](https://www.getdbt.com/product/what-is-dbt) (Data Build Tool) documentation workflows. Connect your data and generate professional YAML documentation ready for your DBT projects. Do this with the assistance provided by the UI, and even chat with AI models to 100x your productivity!
### 👔 Target Audience
- **Analytics Engineers**: streamline your dbt workflow and maintain consistent data modelling.
- **Data Engineers**: ensure data quality across your infrastructure through thorough testing.
- **Data Managers**: automate tedious tasks and help your team focus on delivering value.
- **AI Enthusiasts**: Experiment with local LLMs or cloud providers for automation tasks.
### ✨ Key Features
- **🛠️ Non-AI Support**: Generate documentation without requiring AI models.
- **🤖 Multiple LLM Providers**: Choose from [OpenAI's GPT models](https://openai.com/api/), local [Ollama](https://ollama.com/), or [LM Studio](https://lmstudio.ai/).
- **💬 Interactive Chat**: Ask questions about your data and get specific recommendations.
- **🔧 Developer Mode**: Token metrics, response times, parameters, prompts and debugging information.
- **⚙️ Advanced Configuration**: Fine-tune generation parameters.
- **🧠 Chain of Thought**: View AI reasoning process (when available).
- **📈 Real-time Metrics**: Monitor API usage, token consumption, and performance.
- **🔌 Multiple Data Sources**: Connect to [Snowflake](https://www.snowflake.com/en/), [BigQuery](https://cloud.google.com/bigquery?hl=en), and more for seamless data integration.
### ⏳ More to come
- **More Test Coverage**: automation of [dbt-utils](https://github.com/dbt-labs/dbt-utils), [dbt-expectations](https://hub.getdbt.com/metaplane/dbt_expectations/latest/) and [dbt-data-reliability](https://github.com/elementary-data/dbt-data-reliability) packages.
- **Sources**: use docbt to automate [source](https://docs.getdbt.com/reference/source-configs) declaration and documentation.
- **Extra LLM Providers**: use [Gemini](https://gemini.google.com/), [Grok](https://grok.com/), [Claude](https://claude.ai/) and others to streamline your work.
- **Extra Data Sources**: connect to [Databricks](https://www.databricks.com/), [PostgreSQL](https://www.postgresql.org/), [Redshift](https://aws.amazon.com/redshift/) and others.
- **One-click analytics**: gain critical insights into your data to better assign tests.
## 📋 Contents
- [📖 Why **docbt**](#-why-docbt)
  - [👔 Target Audience](#-target-audience)
  - [✨ Key Features](#-key-features)
  - [⏳ More to come](#-more-to-come)
- [🚀 Quick Start](#-quick-start)
  - [Prerequisites](#prerequisites)
  - [📦 Installation](#-installation)
  - [🔧 Building from Source](#-building-from-source)
- [🎯 Usage](#-usage)
  - [Data Tab](#data-tab)
  - [Node Tab](#node-tab)
  - [Columns Tab](#columns-tab)
  - [Sidebar and Config Tab](#sidebar-and-config-tab)
  - [AI Tab](#ai-tab)
  - [Chat Tab](#chat-tab)
- [🔧 Configuration Overview](#-configuration-overview)
  - [Logging Level](#logging-configuration)
  - [LLM Providers](#llm-providers)
  - [Advanced Parameters](#advanced-parameters)
  - [🗄️ Data Providers](#️-data-providers)
- [🐛 Troubleshooting](#-troubleshooting)
- [📝 License](#-license)
- [🙏 Acknowledgments](#-acknowledgments)
- [📬 Support](#-support)
- [🤝 Contributing](#-contributing)
- [💰 Sponsoring](#-sponsoring)
## 🚀 Quick Start
### Prerequisites
- Python 3.10 or higher
- [uv](https://docs.astral.sh/uv/) (<i>recommended</i>), [Poetry](https://python-poetry.org/), or good old [pip](https://pypi.org/project/pip/) for package management
- Optional: [Ollama](https://ollama.com/download), [LM Studio](https://lmstudio.ai/download), or [OpenAI API](https://openai.com/api/) key for AI assistance
- Optional: [Docker](https://www.docker.com/), [Docker Compose](https://docs.docker.com/compose/) for containerized deployment
### 📦 Installation
We recommend always isolating your code within a [virtual environment](https://www.w3schools.com/python/python_virtualenv.asp) and installing the package in it to avoid dependency issues.
#### Using uv
```bash
# Create a virtual environment
uv venv
# Activate your virtual environment
source .venv/bin/activate
# Install package version of your choice
uv add docbt # For base package with no data platform
uv add "docbt[snowflake]" # For adding Snowflake provider
uv add "docbt[bigquery]" # For adding BigQuery provider
uv add "docbt[all-providers]" # For adding all available data providers
uv add "docbt[dev]" # For development
# (alternatively) use uv pip
uv pip install docbt
# Verify installation
docbt --version
# Run the application
docbt run
```
<details>
<summary><b>Using Poetry</b></summary>
```bash
# Initialize or navigate to your project
# If you don't have a pyproject.toml yet
poetry init
# Add docbt to your project
poetry add docbt # For base package with no data platform
poetry add "docbt[snowflake]" # For adding Snowflake provider
poetry add "docbt[bigquery]" # For adding BigQuery provider
poetry add "docbt[all-providers]" # For adding all available data providers
# Development dependencies (optional)
poetry add --group dev "docbt[dev]"
# Activate the Poetry shell
poetry shell
# Verify installation
docbt --version
# Run the application
docbt run
```
</details>
<details>
<summary><b>Using pip</b></summary>
```bash
# Create a virtual environment
python -m venv env
# Activate it
source env/bin/activate
# Install package version of your choice
pip install docbt # For base package with no data platform
pip install "docbt[snowflake]" # For adding Snowflake provider
pip install "docbt[bigquery]" # For adding BigQuery provider
pip install "docbt[all-providers]" # For adding all available data providers
pip install "docbt[dev]" # For development
# Verify installation
docbt --version
# Run the application
docbt run
```
</details>
### 🔧 Building from Source
Building from source gives you access to the latest development features and allows you to contribute to the project. We recommend using [uv](https://docs.astral.sh/uv/) for faster dependency resolution and installation. This is also what we, the developers, use.
<details>
<summary><b>Using uv (Recommended)</b></summary>
```bash
# Clone the repository
git clone https://github.com/aleenprd/docbt.git
cd docbt
# Create and activate a virtual environment
uv venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
# Install in editable mode with all dependencies
uv pip install -e . # Base installation
uv pip install -e ".[snowflake]" # With Snowflake support
uv pip install -e ".[bigquery]" # With BigQuery support
uv pip install -e ".[all-providers]" # With all data providers
uv pip install -e ".[dev]" # With development tools
# Verify installation
docbt --version
# Run the application
docbt run
```
</details>
<details>
<summary><b>Using pip</b></summary>
```bash
# Clone the repository
git clone https://github.com/aleenprd/docbt.git
cd docbt
# Create and activate a virtual environment
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
# Upgrade pip
pip install --upgrade pip
# Install in editable mode
pip install -e . # Base installation
pip install -e ".[snowflake]" # With Snowflake support
pip install -e ".[bigquery]" # With BigQuery support
pip install -e ".[all-providers]" # With all data providers
pip install -e ".[dev]" # With development tools
# Verify installation
docbt --version
# Run the application
docbt run
```
</details>
<details>
<summary><b>Using Poetry</b></summary>
```bash
# Clone the repository
git clone https://github.com/aleenprd/docbt.git
cd docbt
# Install dependencies
poetry install
# Install with extras
poetry install --extras "snowflake bigquery"
# Activate the virtual environment
poetry shell
# Run the application
docbt run
```
</details>
<details>
<summary><b>Using Pipenv</b></summary>
```bash
# Clone the repository
git clone https://github.com/aleenprd/docbt.git
cd docbt
# Install dependencies
pipenv install --dev
# Activate the virtual environment
pipenv shell
# Install in editable mode
pip install -e .
# Run the application
docbt run
```
</details>
<details>
<summary><b>Development Setup</b></summary>
For contributors and developers:
```bash
# Clone and navigate to the repository
git clone https://github.com/aleenprd/docbt.git
cd docbt
# Install with development dependencies (using uv)
uv venv
source .venv/bin/activate
uv pip install -e ".[dev]"
# Install pre-commit hooks (optional but recommended)
pre-commit install
# Run tests
make test
# Run linting and formatting
make lint
make format
# Check code quality
ruff check .
ruff format .
# Run specific test files
pytest tests/server/test_server.py -v
```
</details>
<details>
<summary><b>Verifying Your Installation</b></summary>
After building from source, verify everything works:
```bash
# Check version
docbt --version
# View help
docbt help
# Run the server
docbt run
# Run with custom settings
docbt run --port 8080 --log-level DEBUG
```
</details>
<details>
<summary><b>Using Make (Recommended for Contributors)</b></summary>
If you're contributing to the project, using Make provides the easiest setup experience with automated tasks.
**Prerequisites:**
- Make (usually pre-installed on Linux/macOS)
- Git
```bash
# Clone the repository
git clone https://github.com/aleenprd/docbt.git
cd docbt
# Create virtual environment (Make will use uv automatically)
uv venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
# Install all dependencies with one command
make install
# Create .env file from template (keeps section headers, removes comments)
make env
# Edit .env with your credentials
nano .env # or your preferred editor
# Install pre-commit hooks (optional but recommended)
make pre-commit
# Verify installation by running tests
make test
# Run the application
docbt run
```
**Common Make commands for development:**
```bash
make help # Show all available commands
make install # Install dependencies
make env # Create .env from .env.example
make test # Run tests
make test-cov # Run tests with coverage report
make lint # Check code quality
make format # Auto-format code
make check # Run format check + lint
make ci # Run all CI checks (format, lint, test)
make pre-commit # Install pre-commit hooks
```
**For detailed information on all Make commands, see [Make Commands Guide](docs/MAKE.md).**
</details>
<details>
<summary><b>Troubleshooting Build Issues</b></summary>
**Missing Build Tools:**
```bash
# Ubuntu/Debian
sudo apt-get update
sudo apt-get install python3-dev build-essential
# macOS (requires Homebrew)
brew install python@3.10
# Windows (requires Visual Studio Build Tools)
# Download from: https://visualstudio.microsoft.com/downloads/
```
**Dependency Conflicts:**
```bash
# Clear pip cache
pip cache purge
# Or with uv
uv cache clean
# Reinstall from scratch
rm -rf .venv
uv venv
source .venv/bin/activate
uv pip install -e ".[dev]"
```
**Permission Issues:**
```bash
# Don't use sudo with pip/uv in virtual environments
# If you get permission errors, ensure you're in an activated venv
source .venv/bin/activate
```
</details>
## 🎯 Usage
[**View live demo app**](https://docbt-demo.streamlit.app/)

**d<span style="color: #FF8C00;">oc</span>bt** comes equipped with a command-line tool that supports the following commands:
- **--version**: prints the version of the package.
- **help**: prints detailed information about the commands and options you can use to run the app.
- **run**: runs the Streamlit app, with options to specify host, port, and log level (see the sketch below).
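A quick tour of the CLI; `--port` and `--log-level` appear in examples elsewhere in this README, and `docbt help` lists the full set of run options:

```bash
# Print the installed version
docbt --version

# Show detailed usage information, including all run options
docbt help

# Start the Streamlit app with a custom port and verbose logging
docbt run --port 8080 --log-level DEBUG
```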
### Data Tab
Provide the app with data to start working with it:
- **Upload**: CSV or JSON files from your local storage (see the sample below)
- **Data Warehouse**: connect to your data platform like Snowflake or BigQuery
- **Context Integration**: Data automatically included in AI conversations
- **Statistics and EDA**: (coming soon)
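To try the Upload option without a warehouse connection, any small CSV works; a hypothetical sample:

```bash
# Create a tiny sample CSV to upload in the Data tab (illustrative only)
cat > sample_orders.csv <<'EOF'
order_id,order_status,amount
1,placed,19.99
2,shipped,5.49
EOF
```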

### Node Tab
Here you can set up the configuration for your node
- **Provide specific config**: customize your config with platform-specific properties
- **Configure node properties**: from materialization to meta-tags
- **Apply node-level data tests**: (coming soon)

### Columns Tab
Here you can set up the configuration, documentation and tests for your columns

### Sidebar and Config Tab
See the end result of your work in real time:
- **Preview Configuration**: interactive visual representation of the generated YAML (see the sketch below)
- **Real-time Updates**: see changes live as you configure your documentation using the UI
- **AI Suggestions**: use LLMs to generate node- and column-level descriptions, and to suggest constraints and data tests
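The preview follows standard dbt model properties. A sketch of the kind of YAML you end up with; the model and column names here are hypothetical, and docbt's actual output may differ:

```bash
# Illustrative only: write a dbt-style properties file like the one
# previewed in the Config tab (names are hypothetical)
cat > models/staging/stg_orders.yml <<'EOF'
version: 2

models:
  - name: stg_orders
    description: "Staging model for raw order events."
    config:
      materialized: view
    columns:
      - name: order_id
        description: "Unique identifier for each order."
        data_tests:
          - unique
          - not_null
      - name: order_status
        description: "Current lifecycle status of the order."
        data_tests:
          - accepted_values:
              values: ["placed", "shipped", "completed", "returned"]
EOF
```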

### AI Tab
Configure your AI provider and settings
- **Choose Provider**: OpenAI, Ollama, or LM Studio
- **Developer Mode**: Enable advanced settings and metrics
- **System Prompt**: Customize AI context and behavior (developer mode)
- **Generation Parameters**: Control temperature, max tokens, top-p, stop sequences, etc.

### Chat Tab
Interact with your AI assistant, with a sample of your data included in context
- Ask questions about DBT best practices or your data in general
- Get recommendations for data modeling and data use cases
- Just have whatever type of conversation you want with your model
- Enable "Chain of Thought" to see AI reasoning

## 🔧 Configuration Overview
The behavior of the app can be configured through environment variables. You can find an example [environment file](.env.example) in the repo. Running `make env` (for developers) will also generate your own `.env` file to work with. Alternatively, copy the `.env.example` contents into `.env` to make use of docbt's [python-dotenv](https://pypi.org/project/python-dotenv/) support, or just export the environment variables/inject them into your environment of choice.
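A minimal dotenv workflow, assuming you run docbt from the directory containing `.env`:

```bash
# Copy the example environment file and fill in your own values
cp .env.example .env

# docbt picks the variables up via python-dotenv on startup
docbt run

# Or skip the file and export variables directly
export DOCBT_LOG_LEVEL=DEBUG
docbt run
```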
### Logging Configuration
Control the verbosity of docbt's logging output to help with debugging or reduce noise in production.
**Setting Log Level:**
You can configure the logging level in two ways:
1. **CLI Flag** (highest priority):
```bash
docbt run --log-level DEBUG
```
2. **Environment Variable** (used if no CLI flag provided):
```bash
# In .env file
DOCBT_LOG_LEVEL=DEBUG
# Or export directly
export DOCBT_LOG_LEVEL=DEBUG
```
**Available Log Levels:**
- `TRACE` - Most verbose, includes all internal details
- `DEBUG` - Detailed debugging information (useful for troubleshooting)
- `INFO` - General informational messages (default)
- `SUCCESS` - Success messages only
- `WARNING` - Warning messages and above
- `ERROR` - Error messages and above
- `CRITICAL` - Only critical errors
**Examples:**
```bash
# Use DEBUG level for troubleshooting
docbt run --log-level DEBUG
# Use environment variable for persistent configuration
echo "DOCBT_LOG_LEVEL=DEBUG" >> .env
docbt run
# Reduce logging noise in production
docbt run --log-level WARNING
```
**Note:** The CLI flag always takes precedence over the environment variable. If neither is specified, the default level is `INFO`.
### LLM Providers
```bash
# Enable/disable AI usage
DOCBT_USE_AI_DEFAULT=false
# Enable/disable developer mode for advanced features
DOCBT_DEVELOPER_MODE_ENABLED=true
DOCBT_SHOW_CHAIN_OF_THOUGHT=true
# Choose which provider appears as your default (one of: openai, ollama, lmstudio)
DOCBT_LLM_PROVIDER_DEFAULT=openai
```
#### OpenAI
We recommend working with the gpt-5 series, but you can use the `Fetch Models` button to pick from whatever OpenAI has to offer.
- **gpt-5-nano**: good for most tasks and very cheap, but fails to produce valid structured output with a large sample size or too many columns.
- **gpt-5-mini**: handles itself better than nano, though worse at long context than gpt-5. A good middle ground.
- **gpt-5**: the best of the gpt-5 series but the most expensive. Use sparingly.
```bash
# Set your API key
export DOCBT_OPENAI_API_KEY="sk-..."
# Or add to .env file
DOCBT_OPENAI_API_KEY=sk-...
# Enable it in the UI
DOCBT_DISPLAY_LLM_PROVIDER_OPENAI=true
```
#### Ollama (OSS)
We recommend using models such as:
- [Qwen3 series](https://ollama.com/library/qwen3) especially in the 4B to 14B range
```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
# Pull a model
ollama pull qwen3:4b
# Start server (default: http://localhost:11434)
ollama serve
# Set host and port environment variables
DOCBT_OLLAMA_HOST=localhost
DOCBT_OLLAMA_PORT=11434
# Enable it in the UI
DOCBT_DISPLAY_LLM_PROVIDER_OLLAMA=true
```
#### LM Studio (OSS)
Some models we would recommend are:
- [Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507) or the 8B/14B variant
- [Qwen3-4B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507) or the 8B/14B variant
- [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) if your GPU permits
**Note**: some models are incapable of producing valid structured outputs. For example, oddly enough, gpt-oss cannot. Experiment and find out what works for your use case and hardware. Increasing the context window in LM Studio can resolve errors, especially with data that has lots of columns.
1. Download from [lmstudio.ai](https://lmstudio.ai/)
2. Browse models and download the ones you want
3. Enable "Local Server" (default: http://localhost:1234) from UI
```bash
# Set host and port environment variables
DOCBT_LMSTUDIO_HOST=localhost
DOCBT_LMSTUDIO_PORT=1234
# Enable it in the UI
DOCBT_DISPLAY_LLM_PROVIDER_LMSTUDIO=true
```
### Advanced Parameters
In Developer Mode, fine-tune AI generation with [inference parameters](https://medium.com/@rafaelcostadealmeida159/llm-inference-understanding-how-models-generate-responses-until-we-force-hallucination-and-how-836d12a5592e):
- **API Timeout**: number of seconds before an API call fails
- **Max Tokens**: Maximum response length (100-4000)
- **Temperature**: Creativity level (0.0-2.0)
- `0.0`: Deterministic, focused
- `1.0`: Balanced
- `2.0`: More creative, random
- **Top P**: Nucleus sampling (0.0-1.0)
- **Stop Sequences**: Custom stop words/phrases
**Note**: the gpt-5 series does not support temperature (it is fixed at 1), top-p, or stop sequences.
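These are the standard inference parameters of OpenAI-compatible chat completion APIs, which local servers such as LM Studio and Ollama also expose. A raw-request sketch of what the UI sets for you; the host, port, and model name depend entirely on your setup:

```bash
# Sketch: the same parameters sent directly to a local
# OpenAI-compatible endpoint (LM Studio shown; adjust to your setup)
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3-4b-instruct-2507",
    "messages": [{"role": "user", "content": "Suggest a description for an order_id column."}],
    "temperature": 0.2,
    "top_p": 0.9,
    "max_tokens": 500,
    "stop": ["---"]
  }'
```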
### 🗄️ Data Providers
You can use different connection methods to connect to the following data platforms:
#### Snowflake
[Connect](https://docs.snowflake.com/en/developer-guide/python-connector/python-connector-connect) to Snowflake with a password, SSO, MFA, or an RSA key pair.
```bash
# Example: connect with your user and password
DOCBT_SNOWFLAKE_ACCOUNT=your-account-id
DOCBT_SNOWFLAKE_USER=your-username
DOCBT_SNOWFLAKE_PASSWORD=your-password
DOCBT_SNOWFLAKE_WAREHOUSE=your-warehouse
DOCBT_SNOWFLAKE_DATABASE=your-database
DOCBT_SNOWFLAKE_SCHEMA=PUBLIC
DOCBT_SNOWFLAKE_AUTHENTICATOR=snowflake
```
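The example above uses password auth. For browser-based SSO, the Snowflake Python connector accepts the `externalbrowser` authenticator; assuming docbt passes the value through unchanged, the setup would look like:

```bash
# Assumed SSO variant: authenticator=externalbrowser triggers
# browser-based SSO login in the Snowflake Python connector
DOCBT_SNOWFLAKE_ACCOUNT=your-account-id
DOCBT_SNOWFLAKE_USER=your-username
DOCBT_SNOWFLAKE_AUTHENTICATOR=externalbrowser
DOCBT_SNOWFLAKE_WAREHOUSE=your-warehouse
DOCBT_SNOWFLAKE_DATABASE=your-database
DOCBT_SNOWFLAKE_SCHEMA=PUBLIC
```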
#### BigQuery
Currently, the BigQuery connection only works with the JSON credentials method:
- Install the [Google Cloud SDK](https://cloud.google.com/sdk/docs/install-sdk)
- [Authenticate](https://googleapis.dev/python/google-api-core/latest/auth.html) with JSON credentials
```bash
# Point to your credentials JSON in the environment variables
DOCBT_GOOGLE_APPLICATION_CREDENTIALS=/home/<user>/.config/gcloud/application_default_credentials.json
```
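The credentials file at that default path is what the Cloud SDK's application-default login flow produces:

```bash
# Log in and generate application-default credentials; the JSON file
# lands at ~/.config/gcloud/application_default_credentials.json
gcloud auth application-default login
```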
## 🐛 Troubleshooting
### Common Issues
**Streamlit App/General Issues**
Run docbt with debug log level and inspect the logs. If you find any bugs while doing so, please report them. :)
```bash
docbt run --log-level DEBUG
```
**LLM Connection Errors**
```bash
# Check if Ollama is running
curl http://localhost:11434/api/tags
# Verify LM Studio server
curl http://localhost:1234/v1/models
# Test OpenAI API key
curl -H "Authorization: Bearer $OPENAI_API_KEY" https://api.openai.com/v1/models
```
**Docker Issues**
```bash
# View container logs
docker-compose logs docbt
# Check if container is running
docker ps
# Restart container
docker-compose restart docbt
```
See [Docker Guide](docs/DOCKER.md) for more Docker-specific troubleshooting.
## 📝 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
- Inspired by the [DBT](https://www.getdbt.com/) community
- Built with [Streamlit](https://streamlit.io/)
- AI via [OpenAI](https://openai.com/), [Ollama](https://ollama.ai/), and [LM Studio](https://lmstudio.ai/)
- Data via [Snowflake](https://www.snowflake.com/en/), [BigQuery](https://cloud.google.com/bigquery?hl=en)
## 📬 Support
- 🐛 **Issues**: [GitHub Issues](https://github.com/aleenprd/docbt/issues)
- 💬 **Discussions**: [GitHub Discussions](https://github.com/aleenprd/docbt/discussions)
- 📧 **Email**: [predaalin2694@gmail.com](mailto:predaalin2694@gmail.com)
## 🤝 Contributing
We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.
**Quick Start:**
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Make your changes and add tests
4. Run `ruff format .` and `pytest`
5. Commit your changes (`git commit -m 'feat: add amazing feature'`)
6. Push to the branch (`git push origin feature/amazing-feature`)
7. Open a Pull Request
**CI/CD:** All pull requests are automatically tested with our CI pipeline. See [CI/CD Documentation](docs/CICD.md) for details.
**Development Tools:** We use Make for automation. See [Make Commands Guide](docs/MAKE.md) for all available commands.
## 💰 Sponsoring
If you like what I'm working on and decide to sponsor, you can do so via:
- [PayPal](https://paypal.me/aleenprd?country.x=DK&locale.x=en_US)
- [Buy me a coffee](https://www.buymeacoffee.com/cristianali)
---
**Happy documenting!** 🎉 Generate better DBT documentation with AI assistance.