databricks-sdk

Name: databricks-sdk
Version: 0.37.0
Home page: https://databricks-sdk-py.readthedocs.io
Summary: Databricks SDK for Python (Beta)
Upload time: 2024-11-14 12:09:20
Author: Serge Smertin
Requires Python: >=3.7
Keywords: databricks, sdk
# Databricks SDK for Python (Beta)

[![PyPI - Downloads](https://img.shields.io/pypi/dw/databricks-sdk)](https://pypistats.org/packages/databricks-sdk)
[![PyPI - License](https://img.shields.io/pypi/l/databricks-sdk)](https://github.com/databricks/databricks-sdk-py/blob/main/LICENSE)
[![databricks-sdk](https://snyk.io/advisor/python/databricks-sdk/badge.svg)](https://snyk.io/advisor/python/databricks-sdk)
![PyPI](https://img.shields.io/pypi/v/databricks-sdk)
[![codecov](https://codecov.io/gh/databricks/databricks-sdk-py/branch/main/graph/badge.svg?token=GU63K7WDBE)](https://codecov.io/gh/databricks/databricks-sdk-py)
[![lines of code](https://tokei.rs/b1/github/databricks/databricks-sdk-py)](https://github.com/databricks/databricks-sdk-py)

[Beta](https://docs.databricks.com/release-notes/release-types.html): This SDK is supported for production use cases, 
but we do expect future releases to have some interface changes; see [Interface stability](#interface-stability). 
We are keen to hear feedback from you on these SDKs. Please [file issues](https://github.com/databricks/databricks-sdk-py/issues), and we will address them. 
| See also the [SDK for Java](https://github.com/databricks/databricks-sdk-java) 
| See also the [SDK for Go](https://github.com/databricks/databricks-sdk-go) 
| See also the [Terraform Provider](https://github.com/databricks/terraform-provider-databricks)
| See also cloud-specific docs ([AWS](https://docs.databricks.com/dev-tools/sdk-python.html), 
   [Azure](https://learn.microsoft.com/en-us/azure/databricks/dev-tools/sdk-python), 
   [GCP](https://docs.gcp.databricks.com/dev-tools/sdk-python.html)) 
| See also the [API reference on readthedocs](https://databricks-sdk-py.readthedocs.io/en/latest/)

The Databricks SDK for Python includes functionality to accelerate development with [Python](https://www.python.org/) for the Databricks Lakehouse.
It covers all public [Databricks REST API](https://docs.databricks.com/dev-tools/api/index.html) operations.
The SDK's internal HTTP client is robust and handles failures on different levels by performing intelligent retries.

## Contents

- [Getting started](#getting-started)
- [Code examples](#code-examples)
- [Authentication](#authentication)
- [Long-running operations](#long-running-operations)
- [Paginated responses](#paginated-responses)
- [Single-sign-on with OAuth](#single-sign-on-sso-with-oauth)
- [User Agent Request Attribution](#user-agent-request-attribution)
- [Error handling](#error-handling)
- [Logging](#logging)
- [Interaction with `dbutils`](#interaction-with-dbutils)
- [Interface stability](#interface-stability)

## Getting started<a id="getting-started"></a>

1. Install the Databricks SDK for Python via `pip install databricks-sdk` and instantiate `WorkspaceClient`:

```python
from databricks.sdk import WorkspaceClient
w = WorkspaceClient()
for c in w.clusters.list():
    print(c.cluster_name)
```

The Databricks SDK for Python is compatible with Python 3.7 _(until [June 2023](https://devguide.python.org/versions/))_, 3.8, 3.9, 3.10, and 3.11.  
**Note:** Databricks Runtime versions 13.1 and above include a bundled version of the Python SDK.  
It is highly recommended to upgrade to the latest version, which you can do by running the following in a notebook cell:

```python
%pip install --upgrade databricks-sdk
```
followed by
```python
dbutils.library.restartPython()
```
## Code examples<a id="code-examples"></a>

The Databricks SDK for Python comes with a number of examples demonstrating how to use the library for various common use-cases, including

* [Using the SDK with OAuth from a webserver](https://github.com/databricks/databricks-sdk-py/blob/main/examples/flask_app_with_oauth.py)
* [Using long-running operations](https://github.com/databricks/databricks-sdk-py/blob/main/examples/starting_job_and_waiting.py)
* [Authenticating a client app using OAuth](https://github.com/databricks/databricks-sdk-py/blob/main/examples/local_browser_oauth.py)

These examples and more are located in the [`examples/` directory of the Github repository](https://github.com/databricks/databricks-sdk-py/tree/main/examples).

Some other examples of using the SDK include:
* [Unity Catalog Automated Migration](https://github.com/databricks/ucx) heavily relies on Python SDK for working with Databricks APIs.
* [ip-access-list-analyzer](https://github.com/alexott/databricks-playground/tree/main/ip-access-list-analyzer) checks & prunes invalid entries from IP Access Lists.

## Authentication<a id="authentication"></a>

If you use Databricks [configuration profiles](https://docs.databricks.com/dev-tools/auth.html#configuration-profiles)
or Databricks-specific [environment variables](https://docs.databricks.com/dev-tools/auth.html#environment-variables)
for [Databricks authentication](https://docs.databricks.com/dev-tools/auth.html), the only code required to start
working with a Databricks workspace is the following code snippet, which instructs the Databricks SDK for Python to use
its [default authentication flow](#default-authentication-flow):

```python
from databricks.sdk import WorkspaceClient
w = WorkspaceClient()
w. # press <TAB> for autocompletion
```

The conventional name for the variable that holds the workspace-level client of the Databricks SDK for Python is `w`, which is shorthand for `workspace`.

### In this section

- [Default authentication flow](#default-authentication-flow)
- [Databricks native authentication](#databricks-native-authentication)
- [Azure native authentication](#azure-native-authentication)
- [Overriding .databrickscfg](#overriding-databrickscfg)
- [Additional authentication configuration options](#additional-authentication-configuration-options)

### Default authentication flow

If you run the [Databricks Terraform Provider](https://registry.terraform.io/providers/databrickslabs/databricks/latest),
the [Databricks SDK for Go](https://github.com/databricks/databricks-sdk-go), the [Databricks CLI](https://docs.databricks.com/dev-tools/cli/index.html),
or applications that target the Databricks SDKs for other languages, they will most likely all interoperate nicely together.
By default, the Databricks SDK for Python tries the following [authentication](https://docs.databricks.com/dev-tools/auth.html) methods,
in the following order, until it succeeds:

1. [Databricks native authentication](#databricks-native-authentication)
2. [Azure native authentication](#azure-native-authentication)
3. If the SDK is unsuccessful at this point, it returns an authentication error and stops running.

You can instruct the Databricks SDK for Python to use a specific authentication method by setting the `auth_type` argument
as described in the following sections.
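
For example, a minimal sketch that forces personal access token authentication (assuming the host and token are already available through environment variables or a configuration profile; `'pat'` is the Databricks token authentication type described below):

```python
from databricks.sdk import WorkspaceClient

# Force personal access token authentication; the host and token are
# resolved from the environment or the DEFAULT profile, not hard-coded here.
w = WorkspaceClient(auth_type='pat')
```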

For each authentication method, the SDK searches for compatible authentication credentials in the following locations,
in the following order. Once the SDK finds a compatible set of credentials that it can use, it stops searching:

1. Credentials that are hard-coded into configuration arguments.

   :warning: **Caution**: Databricks does not recommend hard-coding credentials into arguments, as they can be exposed in plain text in version control systems. Use environment variables or configuration profiles instead.

2. Credentials in Databricks-specific [environment variables](https://docs.databricks.com/dev-tools/auth.html#environment-variables).
3. For Databricks native authentication, credentials in the `.databrickscfg` file's `DEFAULT` [configuration profile](https://docs.databricks.com/dev-tools/auth.html#configuration-profiles) from its default file location (`~` for Linux or macOS, and `%USERPROFILE%` for Windows).
4. For Azure native authentication, the SDK searches for credentials through the Azure CLI as needed.

Depending on the Databricks authentication method, the SDK uses the following information. Presented are the `WorkspaceClient` and `AccountClient` arguments (which have corresponding `.databrickscfg` file fields), their descriptions, and any corresponding environment variables.

### Databricks native authentication

By default, the Databricks SDK for Python initially tries [Databricks token authentication](https://docs.databricks.com/dev-tools/api/latest/authentication.html) (`auth_type='pat'` argument). If the SDK is unsuccessful, it then tries Databricks basic (username/password) authentication (`auth_type="basic"` argument).

- For Databricks token authentication, you must provide `host` and `token`; or their environment variable or `.databrickscfg` file field equivalents.
- For Databricks basic authentication, you must provide `host`, `username`, and `password` _(for AWS workspace-level operations)_; or `host`, `account_id`, `username`, and `password` _(for AWS, Azure, or GCP account-level operations)_; or their environment variable or `.databrickscfg` file field equivalents.

| Argument     | Description | Environment variable |
|--------------|-------------|-------------------|
| `host`       | _(String)_ The Databricks host URL for either the Databricks workspace endpoint or the Databricks accounts endpoint. | `DATABRICKS_HOST` |     
| `account_id` | _(String)_ The Databricks account ID for the Databricks accounts endpoint. Only has effect when `Host` is either `https://accounts.cloud.databricks.com/` _(AWS)_, `https://accounts.azuredatabricks.net/` _(Azure)_, or `https://accounts.gcp.databricks.com/` _(GCP)_. | `DATABRICKS_ACCOUNT_ID` |
| `token`      | _(String)_ The Databricks personal access token (PAT) _(AWS, Azure, and GCP)_ or Azure Active Directory (Azure AD) token _(Azure)_. | `DATABRICKS_TOKEN` |
| `username`   | _(String)_ The Databricks username part of basic authentication. Only possible when `Host` is `*.cloud.databricks.com` _(AWS)_. | `DATABRICKS_USERNAME` |
| `password`   | _(String)_ The Databricks password part of basic authentication. Only possible when `Host` is `*.cloud.databricks.com` _(AWS)_. | `DATABRICKS_PASSWORD` |

For example, to use Databricks token authentication:

```python
from databricks.sdk import WorkspaceClient
w = WorkspaceClient(host=input('Databricks Workspace URL: '), token=input('Token: '))
```

### Azure native authentication

By default, the Databricks SDK for Python first tries Azure client secret authentication (`auth_type='azure-client-secret'` argument). If the SDK is unsuccessful, it then tries Azure CLI authentication (`auth_type='azure-cli'` argument). See [Manage service principals](https://learn.microsoft.com/azure/databricks/administration-guide/users-groups/service-principals).

The Databricks SDK for Python picks up an Azure CLI token, if you've previously authenticated as an Azure user by running `az login` on your machine. See [Get Azure AD tokens for users by using the Azure CLI](https://learn.microsoft.com/azure/databricks/dev-tools/api/latest/aad/user-aad-token).
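
For example, a minimal sketch that reuses an existing `az login` session instead of a client secret (the workspace URL is read interactively, mirroring the other examples in this document):

```python
from databricks.sdk import WorkspaceClient

# Reuse the Azure CLI login on this machine for authentication.
w = WorkspaceClient(host=input('Databricks Workspace URL: '),
                    auth_type='azure-cli')
```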

To authenticate as an Azure Active Directory (Azure AD) service principal, you must provide one of the following. See also [Add a service principal to your Azure Databricks account](https://learn.microsoft.com/azure/databricks/administration-guide/users-groups/service-principals#add-sp-account):

- `azure_workspace_resource_id`, `azure_client_secret`, `azure_client_id`, and `azure_tenant_id`; or their environment variable or `.databrickscfg` file field equivalents.
- `azure_workspace_resource_id` and `azure_use_msi`; or their environment variable or `.databrickscfg` file field equivalents.

| Argument              | Description | Environment variable |
|-----------------------|-------------|----------------------|
| `azure_workspace_resource_id`   | _(String)_ The Azure Resource Manager ID for the Azure Databricks workspace, which is exchanged for a Databricks host URL. | `DATABRICKS_AZURE_RESOURCE_ID` |
| `azure_use_msi`       | _(Boolean)_ `true` to use Azure Managed Service Identity passwordless authentication flow for service principals. _This feature is not yet implemented in the Databricks SDK for Python._ | `ARM_USE_MSI` |
| `azure_client_secret` | _(String)_ The Azure AD service principal's client secret. | `ARM_CLIENT_SECRET` |
| `azure_client_id`     | _(String)_ The Azure AD service principal's application ID. | `ARM_CLIENT_ID` |
| `azure_tenant_id`     | _(String)_ The Azure AD service principal's tenant ID. | `ARM_TENANT_ID` |
| `azure_environment`   | _(String)_ The Azure environment type (such as Public, UsGov, China, and Germany) for a specific set of API endpoints. Defaults to `PUBLIC`. | `ARM_ENVIRONMENT` |

For example, to use Azure client secret authentication:

```python
from databricks.sdk import WorkspaceClient
w = WorkspaceClient(host=input('Databricks Workspace URL: '),
                    azure_workspace_resource_id=input('Azure Resource ID: '),
                    azure_tenant_id=input('AAD Tenant ID: '),
                    azure_client_id=input('AAD Client ID: '),
                    azure_client_secret=input('AAD Client Secret: '))
```

Please see more examples in [this document](./docs/azure-ad.md).

### Google Cloud Platform native authentication

By default, the Databricks SDK for Python first tries GCP credentials authentication (`auth_type='google-credentials'` argument). If the SDK is unsuccessful, it then tries Google Cloud Platform (GCP) ID authentication (`auth_type='google-id'` argument).

The Databricks SDK for Python picks up an OAuth token in the scope of the Google Default Application Credentials (DAC) flow. This means that if you have run `gcloud auth application-default login` on your development machine, or you launch the application on compute that is allowed to impersonate the Google Cloud service account specified in `google_service_account`, authentication should work out of the box. See [Creating and managing service accounts](https://cloud.google.com/iam/docs/creating-managing-service-accounts).

To authenticate as a Google Cloud service account, you must provide one of the following:

- `host` and `google_credentials`; or their environment variable or `.databrickscfg` file field equivalents.
- `host` and `google_service_account`; or their environment variable or `.databrickscfg` file field equivalents.

| Argument                 | Description | Environment variable |
|--------------------------|-------------|--------------------------------------------------|
| `google_credentials`     | _(String)_ GCP Service Account Credentials JSON or the location of these credentials on the local filesystem. | `GOOGLE_CREDENTIALS` |
| `google_service_account` | _(String)_ The Google Cloud Platform (GCP) service account e-mail used for impersonation in the Default Application Credentials Flow that does not require a password. | `DATABRICKS_GOOGLE_SERVICE_ACCOUNT` |

For example, to use Google ID authentication:

```python
from databricks.sdk import WorkspaceClient
w = WorkspaceClient(host=input('Databricks Workspace URL: '),
                    google_service_account=input('Google Service Account: '))
```
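
Alternatively, a minimal sketch that authenticates with service account credentials JSON via the `google_credentials` argument described above (the key file path is illustrative):

```python
from databricks.sdk import WorkspaceClient

# google_credentials accepts either the credentials JSON itself or the
# path to the key file on the local filesystem.
w = WorkspaceClient(host=input('Databricks Workspace URL: '),
                    google_credentials='/path/to/service-account-key.json')
```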

### Overriding `.databrickscfg`

For [Databricks native authentication](#databricks-native-authentication), you can override the default behavior for using `.databrickscfg` as follows:

| Argument      | Description | Environment variable |
|---------------|-------------|----------------------|
| `profile`     | _(String)_ A connection profile specified within `.databrickscfg` to use instead of `DEFAULT`. | `DATABRICKS_CONFIG_PROFILE` |
| `config_file` | _(String)_ A non-default location of the Databricks CLI credentials file. | `DATABRICKS_CONFIG_FILE` |

For example, to use a profile named `MYPROFILE` instead of `DEFAULT`:

```python
from databricks.sdk import WorkspaceClient
w = WorkspaceClient(profile='MYPROFILE')
# Now call the Databricks workspace APIs as desired...
```
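
Similarly, a minimal sketch that points the SDK at a credentials file in a non-default location (the path is illustrative):

```python
from databricks.sdk import WorkspaceClient

# Read connection profiles from a custom file instead of ~/.databrickscfg.
w = WorkspaceClient(config_file='/opt/secrets/databrickscfg', profile='MYPROFILE')
# Now call the Databricks workspace APIs as desired...
```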

### Additional authentication configuration options

For all authentication methods, you can override the default behavior in client arguments as follows:

| Argument                | Description | Environment variable   |
|-------------------------|-------------|------------------------|
| `auth_type`             | _(String)_ When multiple auth attributes are available in the environment, use the auth type specified by this argument. This argument also holds the currently selected auth. | `DATABRICKS_AUTH_TYPE` |
| `http_timeout_seconds`  | _(Integer)_ Number of seconds for HTTP timeout. Default is _60_. | _(None)_               |
| `retry_timeout_seconds` | _(Integer)_ Number of seconds to keep retrying HTTP requests. Default is _300 (5 minutes)_. | _(None)_               |
| `debug_truncate_bytes`  | _(Integer)_ Truncate JSON fields in debug logs above this limit. Default is 96. | `DATABRICKS_DEBUG_TRUNCATE_BYTES` |
| `debug_headers`         | _(Boolean)_ `true` to debug HTTP headers of requests made by the application. Default is `false`, as headers contain sensitive data, such as access tokens. | `DATABRICKS_DEBUG_HEADERS` |
| `rate_limit`            | _(Integer)_ Maximum number of requests per second made to Databricks REST API. | `DATABRICKS_RATE_LIMIT` |

For example, to turn on debug HTTP headers:

```python
from databricks.sdk import WorkspaceClient
w = WorkspaceClient(debug_headers=True)
# Now call the Databricks workspace APIs as desired...
```

## Long-running operations<a id="long-running-operations"></a>

When you invoke a long-running operation, the SDK provides a high-level API to _trigger_ these operations and _wait_ for the related entities
to reach the correct state, or to return an error message in case of failure. All long-running operations return a generic `Wait` instance with a `result()`
method that returns the result of the long-running operation once it has finished. The Databricks SDK for Python picks reasonable default timeouts for
every method, but sometimes you may find yourself in a situation where you want to provide a `datetime.timedelta()` as the value of the `timeout`
argument to the `result()` method.

There are a number of long-running operations in Databricks APIs, such as managing:
* Clusters
* Command execution
* Jobs
* Libraries
* Delta Live Tables pipelines
* Databricks SQL warehouses

For example, in the Clusters API, once you create a cluster, you receive a cluster ID, and the cluster is in the `PENDING` state. Meanwhile,
Databricks takes care of provisioning virtual machines from the cloud provider in the background. The cluster is
only usable in the `RUNNING` state, so you have to wait for that state to be reached.

Another example is the API for running a job or repairing the run: right after
the run starts, the run is in the `PENDING` state. The job is only considered to be finished when it is in either
the `TERMINATED` or `SKIPPED` state. Also, you would likely need the error message if the long-running
operation times out and fails with an error code. Other times you may want to configure a timeout other than
the default of 20 minutes.

In the following example, `w.clusters.create_and_wait` returns `ClusterInfo` only once the cluster is in the `RUNNING` state;
otherwise it will time out in 10 minutes:

```python
import datetime
import logging
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
info = w.clusters.create_and_wait(cluster_name='Created cluster',
                                  spark_version='12.0.x-scala2.12',
                                  node_type_id='m5d.large',
                                  autotermination_minutes=10,
                                  num_workers=1,
                                  timeout=datetime.timedelta(minutes=10))
logging.info(f'Created: {info}')
```

Please look at `examples/starting_job_and_waiting.py` for more advanced usage:

```python
import datetime
import logging
import time

from databricks.sdk import WorkspaceClient
import databricks.sdk.service.jobs as j

w = WorkspaceClient()

# create a dummy file on DBFS that just sleeps for 10 seconds
py_on_dbfs = f'/home/{w.current_user.me().user_name}/sample.py'
with w.dbfs.open(py_on_dbfs, write=True, overwrite=True) as f:
    f.write(b'import time; time.sleep(10); print("Hello, World!")')

# trigger one-time-run job and get waiter object
waiter = w.jobs.submit(run_name=f'py-sdk-run-{time.time()}', tasks=[
    j.RunSubmitTaskSettings(
        task_key='hello_world',
        new_cluster=j.BaseClusterInfo(
            spark_version=w.clusters.select_spark_version(long_term_support=True),
            node_type_id=w.clusters.select_node_type(local_disk=True),
            num_workers=1
        ),
        spark_python_task=j.SparkPythonTask(
            python_file=f'dbfs:{py_on_dbfs}'
        ),
    )
])

logging.info(f'starting to poll: {waiter.run_id}')

# callback, that receives a polled entity between state updates
def print_status(run: j.Run):
    statuses = [f'{t.task_key}: {t.state.life_cycle_state}' for t in run.tasks]
    logging.info(f'workflow intermediate status: {", ".join(statuses)}')

# If you want to perform polling in a separate thread, process, or service,
# you can use w.jobs.wait_get_run_job_terminated_or_skipped(
#   run_id=waiter.run_id,
#   timeout=datetime.timedelta(minutes=15),
#   callback=print_status) to achieve the same results.
#
# Waiter interface allows for `w.jobs.submit(..).result()` simplicity in
# the scenarios, where you need to block the calling thread for the job to finish.
run = waiter.result(timeout=datetime.timedelta(minutes=15),
                    callback=print_status)

logging.info(f'job finished: {run.run_page_url}')
```

## Paginated responses<a id="paginated-responses"></a>

On the platform side, the Databricks APIs have different ways to deal with pagination:
* Some APIs follow offset-plus-limit pagination
* Some start their offsets from 0 and some from 1
* Some use cursor-based iteration
* Others just return all results in a single response

The Databricks SDK for Python hides this complexity
under the `Iterator[T]` abstraction, where multi-page results `yield` items. Python typing helps to auto-complete
the individual item fields.

```python
import logging
from databricks.sdk import WorkspaceClient
w = WorkspaceClient()
for repo in w.repos.list():
    logging.info(f'Found repo: {repo.path}')
```

Please look at `examples/last_job_runs.py` for more advanced usage:

```python
import logging
from collections import defaultdict
from datetime import datetime, timezone
from databricks.sdk import WorkspaceClient

latest_state = {}
all_jobs = {}
durations = defaultdict(list)

w = WorkspaceClient()
for job in w.jobs.list():
    all_jobs[job.job_id] = job
    for run in w.jobs.list_runs(job_id=job.job_id, expand_tasks=False):
        durations[job.job_id].append(run.run_duration)
        if job.job_id not in latest_state:
            latest_state[job.job_id] = run
            continue
        if run.end_time < latest_state[job.job_id].end_time:
            continue
        latest_state[job.job_id] = run

summary = []
for job_id, run in latest_state.items():
    summary.append({
        'job_name': all_jobs[job_id].settings.name,
        'last_status': run.state.result_state,
        'last_finished': datetime.fromtimestamp(run.end_time/1000, timezone.utc),
        'average_duration': sum(durations[job_id]) / len(durations[job_id])
    })

for line in sorted(summary, key=lambda s: s['last_finished'], reverse=True):
    logging.info(f'Latest: {line}')
```

## Single-Sign-On (SSO) with OAuth<a id="single-sign-on-sso-with-oauth"></a>

### Authorization Code flow with PKCE

For a regular web app running on a server, it's recommended to use the Authorization Code Flow to obtain an Access Token
and a Refresh Token. This method is considered safe because the Access Token is transmitted directly to the server
hosting the app, without passing through the user's web browser and risking exposure.

To enhance the security of the Authorization Code Flow, the PKCE (Proof Key for Code Exchange) mechanism can be
employed. With PKCE, the calling application generates a secret called the Code Verifier, which is verified by
the authorization server. The app also creates a transform value of the Code Verifier, called the Code Challenge,
and sends it over HTTPS to obtain an Authorization Code. Even if a malicious attacker intercepts the Authorization Code,
they cannot exchange it for a token without possessing the Code Verifier.

The [presented sample](https://github.com/databricks/databricks-sdk-py/blob/main/examples/flask_app_with_oauth.py)
is a Python3 script that uses the Flask web framework along with Databricks SDK for Python to demonstrate how to
implement the OAuth Authorization Code flow with PKCE security. It can be used to build an app where each user uses
their identity to access Databricks resources. The script can be executed with or without client and secret credentials
for a custom OAuth app.

The Databricks SDK for Python exposes the `oauth_client.initiate_consent()` helper to acquire a user redirect URL and initiate
PKCE state verification. Application developers are expected to persist `RefreshableCredentials` in the webapp session
and restore it via the `RefreshableCredentials.from_dict(oauth_client, session['creds'])` helper.

This flow works for both AWS and Azure. It is not supported for GCP at the moment.

```python
from databricks.sdk.oauth import OAuthClient

oauth_client = OAuthClient(host='<workspace-url>',
                           client_id='<oauth client ID>',
                           redirect_url='http://host.domain/callback',
                           scopes=['clusters'])

import secrets
from flask import Flask, render_template_string, request, redirect, url_for, session

APP_NAME = 'flask-demo'
app = Flask(APP_NAME)
app.secret_key = secrets.token_urlsafe(32)


@app.route('/callback')
def callback():
   from databricks.sdk.oauth import Consent
   consent = Consent.from_dict(oauth_client, session['consent'])
   session['creds'] = consent.exchange_callback_parameters(request.args).as_dict()
   return redirect(url_for('index'))


@app.route('/')
def index():
   if 'creds' not in session:
      consent = oauth_client.initiate_consent()
      session['consent'] = consent.as_dict()
      return redirect(consent.auth_url)

   from databricks.sdk import WorkspaceClient
   from databricks.sdk.oauth import SessionCredentials

   credentials_provider = SessionCredentials.from_dict(oauth_client, session['creds'])
   workspace_client = WorkspaceClient(host=oauth_client.host,
                                      product=APP_NAME,
                                      credentials_provider=credentials_provider)

   return render_template_string('...', w=workspace_client)
```

### SSO for local scripts on development machines

For applications that run on developer workstations, the Databricks SDK for Python provides the `auth_type='external-browser'`
utility, which opens a browser for the user to go through the SSO flow. Azure support is still in an early experimental
stage.

```python
from databricks.sdk import WorkspaceClient

host = input('Enter Databricks host: ')

w = WorkspaceClient(host=host, auth_type='external-browser')
clusters = w.clusters.list()

for cl in clusters:
    print(f' - {cl.cluster_name} is {cl.state}')
```

### Creating custom OAuth applications

In order to use OAuth with the Databricks SDK for Python, you should use the `account_client.custom_app_integration.create` API.

```python
import logging, getpass
from databricks.sdk import AccountClient
account_client = AccountClient(host='https://accounts.cloud.databricks.com',
                               account_id=input('Databricks Account ID: '),
                               username=input('Username: '),
                               password=getpass.getpass('Password: '))

logging.info('Enrolling all published apps...')
account_client.o_auth_enrollment.create(enable_all_published_apps=True)

status = account_client.o_auth_enrollment.get()
logging.info(f'Enrolled all published apps: {status}')

custom_app = account_client.custom_app_integration.create(
    name='awesome-app',
    redirect_urls=['https://host.domain/path/to/callback'],
    confidential=True)
logging.info(f'Created new custom app: '
             f'--client_id {custom_app.client_id} '
             f'--client_secret {custom_app.client_secret}')
```

## User Agent Request Attribution<a id="user-agent-request-attribution"></a>

The Databricks SDK for Python uses the `User-Agent` header to include request metadata along with each request. By default, this includes the version of the Python SDK, the version of the Python language used by your application, and the underlying operating system. To statically add additional metadata, you can use the `with_partner()` and `with_product()` functions in the `databricks.sdk.useragent` module. `with_partner()` can be used by partners to indicate that code using the Databricks SDK for Python should be attributed to a specific partner. Multiple partners can be registered at once. Partner names can contain alphanumeric characters as well as `.`, `-`, `_`, or `+`.

```python
from databricks.sdk import useragent
useragent.with_partner("partner-abc")
useragent.with_partner("partner-xyz")
```

`with_product()` can be used to define the name and version of the product that is built with the Databricks SDK for Python. The product name has the same restrictions as the partner name above, and the product version must be a valid [SemVer](https://semver.org/). Subsequent calls to `with_product()` replace the original product with the new user-specified one.

```python
from databricks.sdk import useragent
useragent.with_product("databricks-example-product", "1.2.0")
```

If both the `DATABRICKS_SDK_UPSTREAM` and `DATABRICKS_SDK_UPSTREAM_VERSION` environment variables are defined, these will also be included in the `User-Agent` header.
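
For example, a minimal sketch that sets both variables from Python before the first client is created (the upstream name and version are illustrative):

```python
import os

# Both variables must be defined for the upstream attribution to be added.
os.environ['DATABRICKS_SDK_UPSTREAM'] = 'my-integration'
os.environ['DATABRICKS_SDK_UPSTREAM_VERSION'] = '1.0.0'

from databricks.sdk import WorkspaceClient
w = WorkspaceClient()
```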

If additional metadata needs to be specified that isn't already supported by the above interfaces, you can use the `with_user_agent_extra()` function to register arbitrary key-value pairs to include in the user agent. Multiple values associated with the same key are allowed. Keys have the same restrictions as the partner name above. Values must be either as described above or SemVer strings.
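
For example, a minimal sketch of the module-level helper described above (the key and value are illustrative):

```python
from databricks.sdk import useragent

# Register an arbitrary key-value pair; multiple values per key are allowed,
# and values must satisfy the same restrictions described above.
useragent.with_user_agent_extra('integration-test', '1.2.3')
```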

Additional `User-Agent` information can be associated with different instances of `DatabricksConfig`. To add metadata to a specific instance of `DatabricksConfig`, use the `with_user_agent_extra()` method.

## Error handling<a id="error-handling"></a>

The Databricks SDK for Python provides a robust error-handling mechanism that allows developers to catch and handle API errors. When an error occurs, the SDK will raise an exception that contains information about the error, such as the HTTP status code, error message, and error details. Developers can catch these exceptions and handle them appropriately in their code.

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.errors import ResourceDoesNotExist

w = WorkspaceClient()
try:
    w.clusters.get(cluster_id='1234-5678-9012')
except ResourceDoesNotExist as e:
    print(f'Cluster not found: {e}')
```

The SDK handles inconsistencies in error responses amongst the different services, providing a consistent interface for developers to work with. Simply catch the appropriate exception type and handle the error as needed. The errors returned by the Databricks API are defined in [databricks/sdk/errors/platform.py](https://github.com/databricks/databricks-sdk-py/blob/main/databricks/sdk/errors/platform.py).
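
When no more specific exception class applies, you can fall back to the common base class; a minimal sketch (the cluster ID is illustrative):

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.errors import DatabricksError, ResourceDoesNotExist

w = WorkspaceClient()
try:
    w.clusters.get(cluster_id='1234-5678-9012')
except ResourceDoesNotExist:
    print('Cluster not found')
except DatabricksError as e:
    # Any other error returned by the Databricks API surfaces here.
    print(f'Databricks API error: {e}')
```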

## Logging<a id="logging"></a>

The Databricks SDK for Python seamlessly integrates with the standard [Logging facility for Python](https://docs.python.org/3/library/logging.html).
This allows developers to easily enable and customize logging for their Databricks Python projects.
To enable debug logging in your Databricks Python project, you can follow the example below:

```python
import logging, sys
logging.basicConfig(stream=sys.stderr,
                    level=logging.INFO,
                    format='%(asctime)s [%(name)s][%(levelname)s] %(message)s')
logging.getLogger('databricks.sdk').setLevel(logging.DEBUG)

from databricks.sdk import WorkspaceClient
w = WorkspaceClient(debug_truncate_bytes=1024, debug_headers=False)
for cluster in w.clusters.list():
    logging.info(f'Found cluster: {cluster.cluster_name}')
```

In the above code snippet, the logging module is imported, `basicConfig()` configures the root logger, and the `databricks.sdk` logger's level is set to `DEBUG` via `setLevel()`.
This enables debug-level logging for the SDK. Developers can adjust the logging level as needed to control the verbosity of the logging output.
The SDK will log all requests and responses to standard error output, using the format `> ` for requests and `< ` for responses.
In some cases, requests or responses may be truncated due to size considerations. If this occurs, the log message will include
the text `... (XXX additional elements)` to indicate that the request or response has been truncated. To increase the truncation limits,
developers can set the `debug_truncate_bytes` configuration property or the `DATABRICKS_DEBUG_TRUNCATE_BYTES` environment variable.
To protect sensitive data, such as authentication tokens, passwords, or any HTTP headers, the SDK will automatically replace these
values with `**REDACTED**` in the log output. Developers can disable this redaction by setting the `debug_headers` configuration property to `True`.

```text
2023-03-22 21:19:21,702 [databricks.sdk][DEBUG] GET /api/2.0/clusters/list
< 200 OK
< {
<   "clusters": [
<     {
<       "autotermination_minutes": 60,
<       "cluster_id": "1109-115255-s1w13zjj",
<       "cluster_name": "DEFAULT Test Cluster",
<       ... truncated for brevity
<     },
<     "... (47 additional elements)"
<   ]
< }
```

Overall, the logging capabilities provided by the Databricks SDK for Python can be a powerful tool for monitoring and troubleshooting your
Databricks Python projects. Developers can use the various logging methods and configuration options provided by the SDK to customize
the logging output to their specific needs.

## Interaction with `dbutils`<a id="interaction-with-dbutils"></a>

You can use the client-side implementation of [`dbutils`](https://docs.databricks.com/dev-tools/databricks-utils.html) by accessing the `dbutils` property on the `WorkspaceClient`.
Most of the `dbutils.fs` operations and `dbutils.secrets` are implemented natively in Python within the Databricks SDK. Non-SDK implementations still require a Databricks cluster,
which you have to specify through the `cluster_id` configuration attribute or the `DATABRICKS_CLUSTER_ID` environment variable. Don't worry if the cluster is not running: internally,
the Databricks SDK for Python calls `w.clusters.ensure_cluster_is_running()`.

```python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
dbutils = w.dbutils

files_in_root = dbutils.fs.ls('/')
print(f'number of files in root: {len(files_in_root)}')
```

Alternatively, you can import `dbutils` from `databricks.sdk.runtime` module, but you have to make sure that all configuration is already [present in the environment variables](#default-authentication-flow):

```python
from databricks.sdk.runtime import dbutils

for secret_scope in dbutils.secrets.listScopes():
    for secret_metadata in dbutils.secrets.list(secret_scope.name):
        print(f'found {secret_metadata.key} secret in {secret_scope.name} scope')
```

## Interface stability<a id="interface-stability"></a>

Databricks is actively working on stabilizing the Databricks SDK for Python's interfaces. 
API clients for all services are generated from specification files that are synchronized from the main platform. 
You are highly encouraged to pin the exact dependency version and read the [changelog](https://github.com/databricks/databricks-sdk-py/blob/main/CHANGELOG.md) 
where Databricks documents the changes. Databricks may have minor [documented](https://github.com/databricks/databricks-sdk-py/blob/main/CHANGELOG.md) 
backward-incompatible changes, such as renaming some type names to bring more consistency.

            

Raw data

            {
    "_id": null,
    "home_page": "https://databricks-sdk-py.readthedocs.io",
    "name": "databricks-sdk",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.7",
    "maintainer_email": null,
    "keywords": "databricks sdk",
    "author": "Serge Smertin",
    "author_email": "serge.smertin@databricks.com",
    "download_url": "https://files.pythonhosted.org/packages/d3/f7/684cb730cb908b23afc249583d7ffc692f64e0c45e49ce9ca22fba037475/databricks_sdk-0.37.0.tar.gz",
    "platform": null,
    "description": "# Databricks SDK for Python (Beta)\n\n[![PyPI - Downloads](https://img.shields.io/pypi/dw/databricks-sdk)](https://pypistats.org/packages/databricks-sdk)\n[![PyPI - License](https://img.shields.io/pypi/l/databricks-sdk)](https://github.com/databricks/databricks-sdk-py/blob/main/LICENSE)\n[![databricks-sdk](https://snyk.io/advisor/python/databricks-sdk/badge.svg)](https://snyk.io/advisor/python/databricks-sdk)\n![PyPI](https://img.shields.io/pypi/v/databricks-sdk)\n[![codecov](https://codecov.io/gh/databricks/databricks-sdk-py/branch/main/graph/badge.svg?token=GU63K7WDBE)](https://codecov.io/gh/databricks/databricks-sdk-py)\n[![lines of code](https://tokei.rs/b1/github/databricks/databricks-sdk-py)]([https://codecov.io/github/databricks/databricks-sdk-py](https://github.com/databricks/databricks-sdk-py))\n\n[Beta](https://docs.databricks.com/release-notes/release-types.html): This SDK is supported for production use cases, \nbut we do expect future releases to have some interface changes; see [Interface stability](#interface-stability). \nWe are keen to hear feedback from you on these SDKs. Please [file issues](https://github.com/databricks/databricks-sdk-py/issues), and we will address them. \n| See also the [SDK for Java](https://github.com/databricks/databricks-sdk-java) \n| See also the [SDK for Go](https://github.com/databricks/databricks-sdk-go) \n| See also the [Terraform Provider](https://github.com/databricks/terraform-provider-databricks)\n| See also cloud-specific docs ([AWS](https://docs.databricks.com/dev-tools/sdk-python.html), \n   [Azure](https://learn.microsoft.com/en-us/azure/databricks/dev-tools/sdk-python), \n   [GCP](https://docs.gcp.databricks.com/dev-tools/sdk-python.html)) \n| See also the [API reference on readthedocs](https://databricks-sdk-py.readthedocs.io/en/latest/)\n\nThe Databricks SDK for Python includes functionality to accelerate development with [Python](https://www.python.org/) for the Databricks Lakehouse.\nIt covers all public [Databricks REST API](https://docs.databricks.com/dev-tools/api/index.html) operations.\nThe SDK's internal HTTP client is robust and handles failures on different levels by performing intelligent retries.\n\n## Contents\n\n- [Getting started](#getting-started)\n- [Code examples](#code-examples)\n- [Authentication](#authentication)\n- [Long-running operations](#long-running-operations)\n- [Paginated responses](#paginated-responses)\n- [Single-sign-on with OAuth](#single-sign-on-sso-with-oauth)\n- [User Agent Request Attribution](#user-agent-request-attribution)\n- [Error handling](#error-handling)\n- [Logging](#logging)\n- [Integration with `dbutils`](#interaction-with-dbutils)\n- [Interface stability](#interface-stability)\n\n## Getting started<a id=\"getting-started\"></a>\n\n1. Please install Databricks SDK for Python via `pip install databricks-sdk` and instantiate `WorkspaceClient`:\n\n```python\nfrom databricks.sdk import WorkspaceClient\nw = WorkspaceClient()\nfor c in w.clusters.list():\n    print(c.cluster_name)\n```\n\nDatabricks SDK for Python is compatible with Python 3.7 _(until [June 2023](https://devguide.python.org/versions/))_, 3.8, 3.9, 3.10, and 3.11.  \n**Note:** Databricks Runtime starting from version 13.1 includes a bundled version of the Python SDK.  
\nIt is highly recommended to upgrade to the latest version which you can do by running the following in a notebook cell:\n\n```python\n%pip install --upgrade databricks-sdk\n```\nfollowed by\n```python\ndbutils.library.restartPython()\n```\n## Code examples<a id=\"code-examples\"></a>\n\nThe Databricks SDK for Python comes with a number of examples demonstrating how to use the library for various common use-cases, including\n\n* [Using the SDK with OAuth from a webserver](https://github.com/databricks/databricks-sdk-py/blob/main/examples/flask_app_with_oauth.py)\n* [Using long-running operations](https://github.com/databricks/databricks-sdk-py/blob/main/examples/starting_job_and_waiting.py)\n* [Authenticating a client app using OAuth](https://github.com/databricks/databricks-sdk-py/blob/main/examples/local_browser_oauth.py)\n\nThese examples and more are located in the [`examples/` directory of the Github repository](https://github.com/databricks/databricks-sdk-py/tree/main/examples).\n\nSome other examples of using the SDK include:\n* [Unity Catalog Automated Migration](https://github.com/databricks/ucx) heavily relies on Python SDK for working with Databricks APIs.\n* [ip-access-list-analyzer](https://github.com/alexott/databricks-playground/tree/main/ip-access-list-analyzer) checks & prunes invalid entries from IP Access Lists.\n\n## Authentication<a id=\"authentication\"></a>\n\nIf you use Databricks [configuration profiles](https://docs.databricks.com/dev-tools/auth.html#configuration-profiles)\nor Databricks-specific [environment variables](https://docs.databricks.com/dev-tools/auth.html#environment-variables)\nfor [Databricks authentication](https://docs.databricks.com/dev-tools/auth.html), the only code required to start\nworking with a Databricks workspace is the following code snippet, which instructs the Databricks SDK for Python to use\nits [default authentication flow](#default-authentication-flow):\n\n```python\nfrom databricks.sdk import WorkspaceClient\nw = WorkspaceClient()\nw. # press <TAB> for autocompletion\n```\n\nThe conventional name for the variable that holds the workspace-level client of the Databricks SDK for Python is `w`, which is shorthand for `workspace`.\n\n### In this section\n\n- [Default authentication flow](#default-authentication-flow)\n- [Databricks native authentication](#databricks-native-authentication)\n- [Azure native authentication](#azure-native-authentication)\n- [Overriding .databrickscfg](#overriding-databrickscfg)\n- [Additional authentication configuration options](#additional-authentication-configuration-options)\n\n### Default authentication flow\n\nIf you run the [Databricks Terraform Provider](https://registry.terraform.io/providers/databrickslabs/databricks/latest),\nthe [Databricks SDK for Go](https://github.com/databricks/databricks-sdk-go), the [Databricks CLI](https://docs.databricks.com/dev-tools/cli/index.html),\nor applications that target the Databricks SDKs for other languages, most likely they will all interoperate nicely together.\nBy default, the Databricks SDK for Python tries the following [authentication](https://docs.databricks.com/dev-tools/auth.html) methods,\nin the following order, until it succeeds:\n\n1. [Databricks native authentication](#databricks-native-authentication)\n2. [Azure native authentication](#azure-native-authentication)\n4. 
If the SDK is unsuccessful at this point, it returns an authentication error and stops running.\n\nYou can instruct the Databricks SDK for Python to use a specific authentication method by setting the `auth_type` argument\nas described in the following sections.\n\nFor each authentication method, the SDK searches for compatible authentication credentials in the following locations,\nin the following order. Once the SDK finds a compatible set of credentials that it can use, it stops searching:\n\n1. Credentials that are hard-coded into configuration arguments.\n\n   :warning: **Caution**: Databricks does not recommend hard-coding credentials into arguments, as they can be exposed in plain text in version control systems. Use environment variables or configuration profiles instead.\n\n2. Credentials in Databricks-specific [environment variables](https://docs.databricks.com/dev-tools/auth.html#environment-variables).\n3. For Databricks native authentication, credentials in the `.databrickscfg` file's `DEFAULT` [configuration profile](https://docs.databricks.com/dev-tools/auth.html#configuration-profiles) from its default file location (`~` for Linux or macOS, and `%USERPROFILE%` for Windows).\n4. For Azure native authentication, the SDK searches for credentials through the Azure CLI as needed.\n\nDepending on the Databricks authentication method, the SDK uses the following information. Presented are the `WorkspaceClient` and `AccountClient` arguments (which have corresponding `.databrickscfg` file fields), their descriptions, and any corresponding environment variables.\n\n### Databricks native authentication\n\nBy default, the Databricks SDK for Python initially tries [Databricks token authentication](https://docs.databricks.com/dev-tools/api/latest/authentication.html) (`auth_type='pat'` argument). If the SDK is unsuccessful, it then tries Databricks basic (username/password) authentication (`auth_type=\"basic\"` argument).\n\n- For Databricks token authentication, you must provide `host` and `token`; or their environment variable or `.databrickscfg` file field equivalents.\n- For Databricks basic authentication, you must provide `host`, `username`, and `password` _(for AWS workspace-level operations)_; or `host`, `account_id`, `username`, and `password` _(for AWS, Azure, or GCP account-level operations)_; or their environment variable or `.databrickscfg` file field equivalents.\n\n| Argument     | Description | Environment variable |\n|--------------|-------------|-------------------|\n| `host`       | _(String)_ The Databricks host URL for either the Databricks workspace endpoint or the Databricks accounts endpoint. | `DATABRICKS_HOST` |     \n| `account_id` | _(String)_ The Databricks account ID for the Databricks accounts endpoint. Only has effect when `Host` is either `https://accounts.cloud.databricks.com/` _(AWS)_, `https://accounts.azuredatabricks.net/` _(Azure)_, or `https://accounts.gcp.databricks.com/` _(GCP)_. | `DATABRICKS_ACCOUNT_ID` |\n| `token`      | _(String)_ The Databricks personal access token (PAT) _(AWS, Azure, and GCP)_ or Azure Active Directory (Azure AD) token _(Azure)_. | `DATABRICKS_TOKEN` |\n| `username`   | _(String)_ The Databricks username part of basic authentication. Only possible when `Host` is `*.cloud.databricks.com` _(AWS)_. | `DATABRICKS_USERNAME` |\n| `password`   | _(String)_ The Databricks password part of basic authentication. Only possible when `Host` is `*.cloud.databricks.com` _(AWS)_. 
| `DATABRICKS_PASSWORD` |\n\nFor example, to use Databricks token authentication:\n\n```python\nfrom databricks.sdk import WorkspaceClient\nw = WorkspaceClient(host=input('Databricks Workspace URL: '), token=input('Token: '))\n```\n\n### Azure native authentication\n\nBy default, the Databricks SDK for Python first tries Azure client secret authentication (`auth_type='azure-client-secret'` argument). If the SDK is unsuccessful, it then tries Azure CLI authentication (`auth_type='azure-cli'` argument). See [Manage service principals](https://learn.microsoft.com/azure/databricks/administration-guide/users-groups/service-principals).\n\nThe Databricks SDK for Python picks up an Azure CLI token, if you've previously authenticated as an Azure user by running `az login` on your machine. See [Get Azure AD tokens for users by using the Azure CLI](https://learn.microsoft.com/azure/databricks/dev-tools/api/latest/aad/user-aad-token).\n\nTo authenticate as an Azure Active Directory (Azure AD) service principal, you must provide one of the following. See also [Add a service principal to your Azure Databricks account](https://learn.microsoft.com/azure/databricks/administration-guide/users-groups/service-principals#add-sp-account):\n\n- `azure_workspace_resource_id`, `azure_client_secret`, `azure_client_id`, and `azure_tenant_id`; or their environment variable or `.databrickscfg` file field equivalents.\n- `azure_workspace_resource_id` and `azure_use_msi`; or their environment variable or `.databrickscfg` file field equivalents.\n\n| Argument              | Description | Environment variable |\n|-----------------------|-------------|----------------------|\n| `azure_workspace_resource_id`   | _(String)_ The Azure Resource Manager ID for the Azure Databricks workspace, which is exchanged for a Databricks host URL. | `DATABRICKS_AZURE_RESOURCE_ID` |\n| `azure_use_msi`       | _(Boolean)_ `true` to use Azure Managed Service Identity passwordless authentication flow for service principals. _This feature is not yet implemented in the Databricks SDK for Python._ | `ARM_USE_MSI` |\n| `azure_client_secret` | _(String)_ The Azure AD service principal's client secret. | `ARM_CLIENT_SECRET` |\n| `azure_client_id`     | _(String)_ The Azure AD service principal's application ID. | `ARM_CLIENT_ID` |\n| `azure_tenant_id`     | _(String)_ The Azure AD service principal's tenant ID. | `ARM_TENANT_ID` |\n| `azure_environment`   | _(String)_ The Azure environment type (such as Public, UsGov, China, and Germany) for a specific set of API endpoints. Defaults to `PUBLIC`. | `ARM_ENVIRONMENT` |\n\nFor example, to use Azure client secret authentication:\n\n```python\nfrom databricks.sdk import WorkspaceClient\nw = WorkspaceClient(host=input('Databricks Workspace URL: '),\n                    azure_workspace_resource_id=input('Azure Resource ID: '),\n                    azure_tenant_id=input('AAD Tenant ID: '),\n                    azure_client_id=input('AAD Client ID: '),\n                    azure_client_secret=input('AAD Client Secret: '))\n```\n\nPlease see more examples in [this document](./docs/azure-ad.md).\n\n### Google Cloud Platform native authentication\n\nBy default, the Databricks SDK for Python first tries GCP credentials authentication (`auth_type='google-credentials'`, argument). 
If the SDK is unsuccessful, it then tries Google Cloud Platform (GCP) ID authentication (`auth_type='google-id'`, argument).\n\nThe Databricks SDK for Python picks up an OAuth token in the scope of the Google Default Application Credentials (DAC) flow. This means that if you have run `gcloud auth application-default login` on your development machine, or launch the application on the compute, that is allowed to impersonate the Google Cloud service account specified in `google_service_account`. Authentication should then work out of the box. See [Creating and managing service accounts](https://cloud.google.com/iam/docs/creating-managing-service-accounts).\n\nTo authenticate as a Google Cloud service account, you must provide one of the following:\n\n- `host` and `google_credentials`; or their environment variable or `.databrickscfg` file field equivalents.\n- `host` and `google_service_account`; or their environment variable or `.databrickscfg` file field equivalents.\n\n| Argument                 | Description | Environment variable |\n|--------------------------|-------------|--------------------------------------------------|\n| `google_credentials`     | _(String)_ GCP Service Account Credentials JSON or the location of these credentials on the local filesystem. | `GOOGLE_CREDENTIALS` |\n| `google_service_account` | _(String)_ The Google Cloud Platform (GCP) service account e-mail used for impersonation in the Default Application Credentials Flow that does not require a password. | `DATABRICKS_GOOGLE_SERVICE_ACCOUNT` |\n\nFor example, to use Google ID authentication:\n\n```python\nfrom databricks.sdk import WorkspaceClient\nw = WorkspaceClient(host=input('Databricks Workspace URL: '),\n                    google_service_account=input('Google Service Account: '))\n\n```\n\n### Overriding `.databrickscfg`\n\nFor [Databricks native authentication](#databricks-native-authentication), you can override the default behavior for using `.databrickscfg` as follows:\n\n| Argument      | Description | Environment variable |\n|---------------|-------------|----------------------|\n| `profile`     | _(String)_ A connection profile specified within `.databrickscfg` to use instead of `DEFAULT`. | `DATABRICKS_CONFIG_PROFILE` |\n| `config_file` | _(String)_ A non-default location of the Databricks CLI credentials file. | `DATABRICKS_CONFIG_FILE` |\n\nFor example, to use a profile named `MYPROFILE` instead of `DEFAULT`:\n\n```python\nfrom databricks.sdk import WorkspaceClient\nw = WorkspaceClient(profile='MYPROFILE')\n# Now call the Databricks workspace APIs as desired...\n```\n\n### Additional authentication configuration options\n\nFor all authentication methods, you can override the default behavior in client arguments as follows:\n\n| Argument                | Description | Environment variable   |\n|-------------------------|-------------|------------------------|\n| `auth_type`             | _(String)_ When multiple auth attributes are available in the environment, use the auth type specified by this argument. This argument also holds the currently selected auth. | `DATABRICKS_AUTH_TYPE` |\n| `http_timeout_seconds`  | _(Integer)_ Number of seconds for HTTP timeout. Default is _60_. | _(None)_               |\n| `retry_timeout_seconds` | _(Integer)_ Number of seconds to keep retrying HTTP requests. Default is _300 (5 minutes)_. | _(None)_               |\n| `debug_truncate_bytes`  | _(Integer)_ Truncate JSON fields in debug logs above this limit. Default is 96. 
| `DATABRICKS_DEBUG_TRUNCATE_BYTES` |\n| `debug_headers`         | _(Boolean)_ `true` to debug HTTP headers of requests made by the application. Default is `false`, as headers contain sensitive data, such as access tokens. | `DATABRICKS_DEBUG_HEADERS` |\n| `rate_limit`            | _(Integer)_ Maximum number of requests per second made to Databricks REST API. | `DATABRICKS_RATE_LIMIT` |\n\nFor example, to turn on debug HTTP headers:\n\n```python\nfrom databricks.sdk import WorkspaceClient\nw = WorkspaceClient(debug_headers=True)\n# Now call the Databricks workspace APIs as desired...\n```\n\n## Long-running operations<a id=\"long-running-operations\"></a>\n\nWhen you invoke a long-running operation, the SDK provides a high-level API to _trigger_ these operations and _wait_ for the related entities\nto reach the correct state or return the error message in case of failure. All long-running operations return generic `Wait` instance with `result()`\nmethod to get a result of long-running operation, once it's finished. Databricks SDK for Python picks the most reasonable default timeouts for\nevery method, but sometimes you may find yourself in a situation, where you'd want to provide `datetime.timedelta()` as the value of `timeout`\nargument to `result()` method.\n\nThere are a number of long-running operations in Databricks APIs such as managing:\n* Clusters,\n* Command execution\n* Jobs\n* Libraries\n* Delta Live Tables pipelines\n* Databricks SQL warehouses.\n\nFor example, in the Clusters API, once you create a cluster, you receive a cluster ID, and the cluster is in the `PENDING` state Meanwhile\nDatabricks takes care of provisioning virtual machines from the cloud provider in the background. The cluster is\nonly usable in the `RUNNING` state and so you have to wait for that state to be reached.\n\nAnother example is the API for running a job or repairing the run: right after\nthe run starts, the run is in the `PENDING` state. The job is only considered to be finished when it is in either\nthe `TERMINATED` or `SKIPPED` state. Also you would likely need the error message if the long-running\noperation times out and fails with an error code. 
Other times you may want to configure a custom timeout other than\nthe default of 20 minutes.\n\nIn the following example, `w.clusters.create` returns `ClusterInfo` only once the cluster is in the `RUNNING` state,\notherwise it will timeout in 10 minutes:\n\n```python\nimport datetime\nimport logging\nfrom databricks.sdk import WorkspaceClient\n\nw = WorkspaceClient()\ninfo = w.clusters.create_and_wait(cluster_name='Created cluster',\n                                  spark_version='12.0.x-scala2.12',\n                                  node_type_id='m5d.large',\n                                  autotermination_minutes=10,\n                                  num_workers=1,\n                                  timeout=datetime.timedelta(minutes=10))\nlogging.info(f'Created: {info}')\n```\n\nPlease look at the `examples/starting_job_and_waiting.py` for a more advanced usage:\n\n```python\nimport datetime\nimport logging\nimport time\n\nfrom databricks.sdk import WorkspaceClient\nimport databricks.sdk.service.jobs as j\n\nw = WorkspaceClient()\n\n# create a dummy file on DBFS that just sleeps for 10 seconds\npy_on_dbfs = f'/home/{w.current_user.me().user_name}/sample.py'\nwith w.dbfs.open(py_on_dbfs, write=True, overwrite=True) as f:\n    f.write(b'import time; time.sleep(10); print(\"Hello, World!\")')\n\n# trigger one-time-run job and get waiter object\nwaiter = w.jobs.submit(run_name=f'py-sdk-run-{time.time()}', tasks=[\n    j.RunSubmitTaskSettings(\n        task_key='hello_world',\n        new_cluster=j.BaseClusterInfo(\n            spark_version=w.clusters.select_spark_version(long_term_support=True),\n            node_type_id=w.clusters.select_node_type(local_disk=True),\n            num_workers=1\n        ),\n        spark_python_task=j.SparkPythonTask(\n            python_file=f'dbfs:{py_on_dbfs}'\n        ),\n    )\n])\n\nlogging.info(f'starting to poll: {waiter.run_id}')\n\n# callback, that receives a polled entity between state updates\ndef print_status(run: j.Run):\n    statuses = [f'{t.task_key}: {t.state.life_cycle_state}' for t in run.tasks]\n    logging.info(f'workflow intermediate status: {\", \".join(statuses)}')\n\n# If you want to perform polling in a separate thread, process, or service,\n# you can use w.jobs.wait_get_run_job_terminated_or_skipped(\n#   run_id=waiter.run_id,\n#   timeout=datetime.timedelta(minutes=15),\n#   callback=print_status) to achieve the same results.\n#\n# Waiter interface allows for `w.jobs.submit(..).result()` simplicity in\n# the scenarios, where you need to block the calling thread for the job to finish.\nrun = waiter.result(timeout=datetime.timedelta(minutes=15),\n                    callback=print_status)\n\nlogging.info(f'job finished: {run.run_page_url}')\n```\n\n## Paginated responses<a id=\"paginated-responses\"></a>\n\nOn the platform side the Databricks APIs have different wait to deal with pagination:\n* Some APIs follow the offset-plus-limit pagination\n* Some start their offsets from 0 and some from 1\n* Some use the cursor-based iteration\n* Others just return all results in a single response\n\nThe Databricks SDK for Python hides this  complexity\nunder `Iterator[T]` abstraction, where multi-page results `yield` items. 
Python typing helps to auto-complete
the individual item fields.

```python
import logging
from databricks.sdk import WorkspaceClient
w = WorkspaceClient()
for repo in w.repos.list():
    logging.info(f'Found repo: {repo.path}')
```

Please look at `examples/last_job_runs.py` for more advanced usage:

```python
import logging
from collections import defaultdict
from datetime import datetime, timezone
from databricks.sdk import WorkspaceClient

latest_state = {}
all_jobs = {}
durations = defaultdict(list)

w = WorkspaceClient()
for job in w.jobs.list():
    all_jobs[job.job_id] = job
    for run in w.jobs.list_runs(job_id=job.job_id, expand_tasks=False):
        durations[job.job_id].append(run.run_duration)
        if job.job_id not in latest_state:
            latest_state[job.job_id] = run
            continue
        if run.end_time < latest_state[job.job_id].end_time:
            continue
        latest_state[job.job_id] = run

summary = []
for job_id, run in latest_state.items():
    summary.append({
        'job_name': all_jobs[job_id].settings.name,
        'last_status': run.state.result_state,
        'last_finished': datetime.fromtimestamp(run.end_time/1000, timezone.utc),
        'average_duration': sum(durations[job_id]) / len(durations[job_id])
    })

for line in sorted(summary, key=lambda s: s['last_finished'], reverse=True):
    logging.info(f'Latest: {line}')
```

## Single-Sign-On (SSO) with OAuth<a id="single-sign-on-sso-with-oauth"></a>

### Authorization Code flow with PKCE

For a regular web app running on a server, it's recommended to use the Authorization Code flow to obtain an Access Token
and a Refresh Token. This method is considered safe because the Access Token is transmitted directly to the server
hosting the app, without passing through the user's web browser and risking exposure.

To enhance the security of the Authorization Code flow, the PKCE (Proof Key for Code Exchange) mechanism can be
employed. With PKCE, the calling application generates a secret called the Code Verifier, which is verified by
the authorization server. The app also creates a transformed value of the Code Verifier, called the Code Challenge,
and sends it over HTTPS to obtain an Authorization Code. An attacker who intercepts the Authorization Code
cannot exchange it for a token without possessing the Code Verifier.

The [presented sample](https://github.com/databricks/databricks-sdk-py/blob/main/examples/flask_app_with_oauth.py)
is a Python 3 script that uses the Flask web framework along with the Databricks SDK for Python to demonstrate how to
implement the OAuth Authorization Code flow with PKCE security. It can be used to build an app where each user uses
their own identity to access Databricks resources. The script can be executed with or without client and secret credentials
for a custom OAuth app.

The Databricks SDK for Python exposes the `oauth_client.initiate_consent()` helper to acquire the user redirect URL and initiate
PKCE state verification. Application developers are expected to persist the resulting credentials in the webapp session
and restore them via `SessionCredentials.from_dict(oauth_client, session['creds'])`, as shown in the example below.

This works for both AWS and Azure.
GCP is not supported at the moment.

```python
from databricks.sdk.oauth import OAuthClient

oauth_client = OAuthClient(host='<workspace-url>',
                           client_id='<oauth client ID>',
                           redirect_url='http://host.domain/callback',
                           scopes=['clusters'])

import secrets
from flask import Flask, render_template_string, request, redirect, url_for, session

APP_NAME = 'flask-demo'
app = Flask(APP_NAME)
app.secret_key = secrets.token_urlsafe(32)


@app.route('/callback')
def callback():
   from databricks.sdk.oauth import Consent
   consent = Consent.from_dict(oauth_client, session['consent'])
   session['creds'] = consent.exchange_callback_parameters(request.args).as_dict()
   return redirect(url_for('index'))


@app.route('/')
def index():
   if 'creds' not in session:
      consent = oauth_client.initiate_consent()
      session['consent'] = consent.as_dict()
      return redirect(consent.auth_url)

   from databricks.sdk import WorkspaceClient
   from databricks.sdk.oauth import SessionCredentials

   credentials_provider = SessionCredentials.from_dict(oauth_client, session['creds'])
   workspace_client = WorkspaceClient(host=oauth_client.host,
                                      product=APP_NAME,
                                      credentials_provider=credentials_provider)

   return render_template_string('...', w=workspace_client)
```

### SSO for local scripts on development machines

For applications that run on developer workstations, the Databricks SDK for Python provides the `auth_type='external-browser'`
utility, which opens a browser for the user to go through the SSO flow. Azure support is still in the early experimental
stage.

```python
from databricks.sdk import WorkspaceClient

host = input('Enter Databricks host: ')

w = WorkspaceClient(host=host, auth_type='external-browser')
clusters = w.clusters.list()

for cl in clusters:
    print(f' - {cl.cluster_name} is {cl.state}')
```

### Creating custom OAuth applications

In order to use OAuth with the Databricks SDK for Python, you should use the `account_client.custom_app_integration.create` API.

```python
import logging, getpass
from databricks.sdk import AccountClient
account_client = AccountClient(host='https://accounts.cloud.databricks.com',
                               account_id=input('Databricks Account ID: '),
                               username=input('Username: '),
                               password=getpass.getpass('Password: '))

logging.info('Enrolling all published apps...')
account_client.o_auth_enrollment.create(enable_all_published_apps=True)

status = account_client.o_auth_enrollment.get()
logging.info(f'Enrolled all published apps: {status}')

custom_app = account_client.custom_app_integration.create(
    name='awesome-app',
    redirect_urls=['https://host.domain/path/to/callback'],
    confidential=True)
logging.info(f'Created new custom app: '
             f'--client_id {custom_app.client_id} '
             f'--client_secret {custom_app.client_secret}')
```

## User Agent Request Attribution<a id="user-agent-request-attribution"></a>

The Databricks SDK for Python uses the `User-Agent` header to include request metadata along with each request. By default, this includes the version of the Python SDK, the version of Python used by your application, and the underlying operating system.
To statically add additional metadata, you can use the `with_partner()` and `with_product()` functions in the `databricks.sdk.useragent` module. `with_partner()` can be used by partners to indicate that code using the Databricks SDK for Python should be attributed to a specific partner. Multiple partners can be registered at once. Partner names can contain letters, digits, `.`, `-`, `_` or `+`.

```python
from databricks.sdk import useragent
useragent.with_partner("partner-abc")
useragent.with_partner("partner-xyz")
```

`with_product()` can be used to define the name and version of the product that is built with the Databricks SDK for Python. The product name has the same restrictions as the partner name above, and the product version must be a valid [SemVer](https://semver.org/). Subsequent calls to `with_product()` replace the original product with the new user-specified one.

```python
from databricks.sdk import useragent
useragent.with_product("databricks-example-product", "1.2.0")
```

If both the `DATABRICKS_SDK_UPSTREAM` and `DATABRICKS_SDK_UPSTREAM_VERSION` environment variables are defined, they will also be included in the `User-Agent` header.

If you need to specify additional metadata that isn't already supported by the above interfaces, you can use the `with_user_agent_extra()` function to register arbitrary key-value pairs to include in the user agent. Multiple values associated with the same key are allowed. Keys have the same restrictions as the partner name above. Values must be either as described above or SemVer strings.

Additional `User-Agent` information can also be associated with individual instances of `DatabricksConfig`. To add metadata to a specific instance of `DatabricksConfig`, use its `with_user_agent_extra()` method.
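As a minimal sketch of the module-level variant, using the function name described above (the key and value are placeholders):

```python
from databricks.sdk import useragent

# Append an arbitrary, SemVer-valued key-value pair to the User-Agent header
# of every request made by this process.
useragent.with_user_agent_extra("my-integration", "1.0.0")
```

The per-configuration variant works the same way: call the identically named `with_user_agent_extra()` method on the config object before constructing a client from it.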
## Error handling<a id="error-handling"></a>

The Databricks SDK for Python provides a robust error-handling mechanism that allows developers to catch and handle API errors. When an error occurs, the SDK raises an exception that contains information about the error, such as the HTTP status code, error message, and error details. Developers can catch these exceptions and handle them appropriately in their code.

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.errors import ResourceDoesNotExist

w = WorkspaceClient()
try:
    w.clusters.get(cluster_id='1234-5678-9012')
except ResourceDoesNotExist as e:
    print(f'Cluster not found: {e}')
```

The SDK handles inconsistencies in error responses amongst the different services, providing a consistent interface for developers to work with. Simply catch the appropriate exception type and handle the error as needed. The errors returned by the Databricks API are defined in [databricks/sdk/errors/platform.py](https://github.com/databricks/databricks-sdk-py/blob/main/databricks/sdk/errors/platform.py).
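In current SDK versions, these platform errors derive from a common base class, `DatabricksError`, so you can add a generic fallback after the more specific handlers. A minimal sketch (the cluster ID is a placeholder):

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.errors import DatabricksError, ResourceDoesNotExist

w = WorkspaceClient()
try:
    w.clusters.get(cluster_id='1234-5678-9012')
except ResourceDoesNotExist:
    print('Cluster not found, nothing to clean up')
except DatabricksError as e:
    # Any other Databricks API error ends up here.
    print(f'API call failed: {e}')
```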
## Logging<a id="logging"></a>

The Databricks SDK for Python seamlessly integrates with the standard [Logging facility for Python](https://docs.python.org/3/library/logging.html).
This allows developers to easily enable and customize logging for their Databricks Python projects.
To enable debug logging in your Databricks Python project, you can follow the example below:

```python
import logging, sys
logging.basicConfig(stream=sys.stderr,
                    level=logging.INFO,
                    format='%(asctime)s [%(name)s][%(levelname)s] %(message)s')
logging.getLogger('databricks.sdk').setLevel(logging.DEBUG)

from databricks.sdk import WorkspaceClient
w = WorkspaceClient(debug_truncate_bytes=1024, debug_headers=False)
for cluster in w.clusters.list():
    logging.info(f'Found cluster: {cluster.cluster_name}')
```

In the above code snippet, the `logging` module is configured via `basicConfig()`, and the `databricks.sdk` logger is set to the `DEBUG` level.
This enables logging at the debug level and above for the SDK. Developers can adjust the logging level as needed to control the verbosity of the logging output.
The SDK logs all requests and responses to standard error output, using the format `> ` for requests and `< ` for responses.
In some cases, requests or responses may be truncated due to size considerations. If this occurs, the log message will include
the text `... (XXX additional elements)` to indicate that the request or response has been truncated. To increase the truncation limits,
developers can set the `debug_truncate_bytes` configuration property or the `DATABRICKS_DEBUG_TRUNCATE_BYTES` environment variable.
To protect sensitive data, such as authentication tokens, passwords, or any HTTP headers, the SDK automatically replaces these
values with `**REDACTED**` in the log output. Developers can disable this redaction by setting the `debug_headers` configuration property to `True`.

```text
2023-03-22 21:19:21,702 [databricks.sdk][DEBUG] GET /api/2.0/clusters/list
< 200 OK
< {
<   "clusters": [
<     {
<       "autotermination_minutes": 60,
<       "cluster_id": "1109-115255-s1w13zjj",
<       "cluster_name": "DEFAULT Test Cluster",
<       ... truncated for brevity
<     },
<     "... (47 additional elements)"
<   ]
< }
```

Overall, the logging capabilities provided by the Databricks SDK for Python can be a powerful tool for monitoring and troubleshooting your
Databricks Python projects. Developers can use the various logging methods and configuration options provided by the SDK to customize
the logging output to their specific needs.

## Interaction with `dbutils`<a id="interaction-with-dbutils"></a>

You can use the client-side implementation of [`dbutils`](https://docs.databricks.com/dev-tools/databricks-utils.html) by accessing the `dbutils` property on the `WorkspaceClient`.
Most of the `dbutils.fs` operations and `dbutils.secrets` are implemented natively in Python within the Databricks SDK. Non-SDK implementations still require a Databricks cluster,
which you have to specify through the `cluster_id` configuration attribute or the `DATABRICKS_CLUSTER_ID` environment variable. Don't worry if the cluster is not running: internally,
the Databricks SDK for Python calls `w.clusters.ensure_cluster_is_running()`.

```python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
dbutils = w.dbutils

files_in_root = dbutils.fs.ls('/')
print(f'number of files in root: {len(files_in_root)}')
```

Alternatively, you can import `dbutils` from the `databricks.sdk.runtime` module, but you have to make sure that all configuration is already [present in the environment variables](#default-authentication-flow):

```python
from databricks.sdk.runtime import dbutils

for secret_scope in dbutils.secrets.listScopes():
    for secret_metadata in dbutils.secrets.list(secret_scope.name):
        print(f'found {secret_metadata.key} secret in {secret_scope.name} scope')
```

## Interface stability<a id="interface-stability"></a>

Databricks is actively working on stabilizing the Databricks SDK for Python's interfaces.
API clients for all services are generated from specification files that are synchronized from the main platform.
You are highly encouraged to pin the exact dependency version and read the [changelog](https://github.com/databricks/databricks-sdk-py/blob/main/CHANGELOG.md)
where Databricks documents the changes. Databricks may have minor [documented](https://github.com/databricks/databricks-sdk-py/blob/main/CHANGELOG.md)
backward-incompatible changes, such as renaming some type names to bring more consistency.
",
    "bugtrack_url": null,
    "license": null,
    "summary": "Databricks SDK for Python (Beta)",
    "version": "0.37.0",
    "project_urls": {
        "Homepage": "https://databricks-sdk-py.readthedocs.io"
    },
    "split_keywords": [
        "databricks",
        "sdk"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "9a790cc70123d89c524e900b1c3a097aaa2f3ef9586d3510593ce55bb5b1d598",
                "md5": "2e972c389f0023a356b1fae478d711c2",
                "sha256": "8fc333af657cbc4e46264560af20f0afb9510e25dea272cd84f665d586f83494"
            },
            "downloads": -1,
            "filename": "databricks_sdk-0.37.0-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "2e972c389f0023a356b1fae478d711c2",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.7",
            "size": 571425,
            "upload_time": "2024-11-14T12:09:18",
            "upload_time_iso_8601": "2024-11-14T12:09:18.265434Z",
            "url": "https://files.pythonhosted.org/packages/9a/79/0cc70123d89c524e900b1c3a097aaa2f3ef9586d3510593ce55bb5b1d598/databricks_sdk-0.37.0-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "d3f7684cb730cb908b23afc249583d7ffc692f64e0c45e49ce9ca22fba037475",
                "md5": "12909c667f68d15e15207ebe8727839e",
                "sha256": "92c3159729e136ed8cd1630153855fb3b3afb23172d293a48a2dff55f960bd6b"
            },
            "downloads": -1,
            "filename": "databricks_sdk-0.37.0.tar.gz",
            "has_sig": false,
            "md5_digest": "12909c667f68d15e15207ebe8727839e",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.7",
            "size": 589134,
            "upload_time": "2024-11-14T12:09:20",
            "upload_time_iso_8601": "2024-11-14T12:09:20.850922Z",
            "url": "https://files.pythonhosted.org/packages/d3/f7/684cb730cb908b23afc249583d7ffc692f64e0c45e49ce9ca22fba037475/databricks_sdk-0.37.0.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-11-14 12:09:20",
    "github": false,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "lcname": "databricks-sdk"
}
        