# Databricks ML Bundle CLI
A CLI tool to generate Databricks ML platform project structures with governance and best practices.
## Installation
```bash
pip install databricks-ml-bundle
```
## Usage
Generate a new ML project:
```bash
databricks-ml-init --name my-ml-project --workspace-host https://your-workspace.cloud.databricks.com --model-type segmentation --use-gpu
```
Or use the short command:
```bash
dml-init -n my-ml-project -w https://your-workspace.cloud.databricks.com -m classification
```
### Options
- `--name, -n`: Project name (required)
- `--output-dir, -o`: Output directory (default: current directory)
- `--workspace-host, -w`: Databricks workspace URL (required)
- `--model-type, -m`: Model type: `classification`, `regression`, `segmentation`, `nlp`, or `custom` (default: `custom`)
- `--use-gpu`: Enable GPU configuration for training
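For example, combining these options to scaffold a regression project into a separate output directory (project name and paths are illustrative):

```bash
# Generate a regression project under ./projects instead of the current directory
databricks-ml-init \
  --name churn-regression \
  --workspace-host https://your-workspace.cloud.databricks.com \
  --model-type regression \
  --output-dir ./projects
```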
## Generated Structure
The CLI generates a complete ML platform project with:
- **Multi-environment support** (dev/stg/prod)
- **Unity Catalog integration**
- **MLflow experiment tracking**
- **Quality gates and approvals**
- **CI/CD pipeline with GitHub Actions**
- **Cluster policies and security**
- **Modular Python package structure**
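The exact layout varies with the chosen model type; a typical scaffold might look like the following (illustrative sketch, not an exhaustive listing):

```
my-ml-project/
├── databricks.yml        # bundle configuration with dev/stg/prod targets
├── requirements.txt
├── resources/            # jobs, cluster policies, permissions
└── src/
    └── my_ml_project/    # modular Python package (training, evaluation, serving)
```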
## Example
```bash
# Generate a computer vision project
databricks-ml-init \
  --name vista-segmentation \
  --workspace-host https://my-workspace.cloud.databricks.com \
  --model-type segmentation \
  --use-gpu
# Navigate to project
cd vista-segmentation
# Install dependencies
pip install -r requirements.txt
# Deploy to Databricks
databricks bundle validate --target dev
databricks bundle deploy --target dev
```
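Once the bundle validates and deploys cleanly in dev, the same commands promote it to the other environments (target names follow the dev/stg/prod environments listed above):

```bash
# Promote the bundle through staging and production targets
databricks bundle deploy --target stg
databricks bundle deploy --target prod
```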
## Features
- ✅ **Governance-first**: Built-in security, permissions, and audit trails
- ✅ **Multi-environment**: Separate dev/staging/production environments
- ✅ **Model-specific**: Templates optimized for different ML use cases
- ✅ **Production-ready**: Includes serving endpoints, monitoring, and CI/CD
- ✅ **Unity Catalog**: Full integration with Databricks governance platform
## Development
```bash
# Clone repository
git clone https://github.com/AyushSinghRawat-hub/databricks-ml-bundle-cli
cd databricks-ml-bundle-cli
# Install with Poetry
poetry install
# Run locally
poetry run databricks-ml-init --help
```
## Publishing to PyPI
```bash
# Build package
poetry build
# Publish to PyPI
poetry publish
```
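Before publishing to the real index, you can dry-run the release against TestPyPI (the repository alias `test-pypi` is an arbitrary local name):

```bash
# One-time: register the TestPyPI upload endpoint under a local alias
poetry config repositories.test-pypi https://test.pypi.org/legacy/

# Build and publish to TestPyPI instead of the default index
poetry build
poetry publish -r test-pypi
```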
## License
MIT License