| Name | databricks-sql-connector |
| Version | 4.2.0 |
| download | |
| home_page | None |
| Summary | Databricks SQL Connector for Python |
| upload_time | 2025-11-14 08:04:19 |
| maintainer | None |
| docs_url | None |
| author | Databricks |
| requires_python | <4.0.0,>=3.8.0 |
| license | Apache-2.0 |
| keywords | |
| VCS | |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
# Databricks SQL Connector for Python
[PyPI](https://pypi.org/project/databricks-sql-connector/)
[Downloads](https://pepy.tech/project/databricks-sql-connector)
The Databricks SQL Connector for Python allows you to develop Python applications that connect to Databricks clusters and SQL warehouses. It is a Thrift-based client with no dependencies on ODBC or JDBC. It conforms to the [Python DB API 2.0 specification](https://www.python.org/dev/peps/pep-0249/).
This connector uses Arrow as the data-exchange format and supports APIs (e.g. `fetchmany_arrow`) that fetch Arrow tables directly. Arrow tables are wrapped in the `ArrowQueue` class to provide a natural API for getting several rows at a time. [PyArrow](https://arrow.apache.org/docs/python/index.html) is required to use these APIs; install it via `pip install pyarrow` or `pip install databricks-sql-connector[pyarrow]`.
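A minimal sketch of the Arrow fetch path, assuming `connection` was created as shown in the Quickstart below and the `pyarrow` extra is installed:
```python
cursor = connection.cursor()
cursor.execute("SELECT * FROM RANGE(100000)")

# fetchmany_arrow returns a pyarrow.Table holding up to `size` rows.
batch = cursor.fetchmany_arrow(10000)
print(batch.num_rows, batch.schema)

# fetchall_arrow drains the remaining rows into a single pyarrow.Table.
rest = cursor.fetchall_arrow()
print(rest.num_rows)

cursor.close()
```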
The connector includes built-in support for HTTP/HTTPS proxy servers with multiple authentication methods including basic authentication and Kerberos/Negotiate authentication. See `docs/proxy.md` and `examples/proxy_authentication.py` for details.
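The authoritative configuration options live in those files; as a rough sketch (an assumption, not verified against `docs/proxy.md`), a proxy is typically supplied through the standard environment variables before connecting:
```python
import os

# Placeholder proxy host and credentials; Kerberos/Negotiate setups need the
# additional options described in docs/proxy.md.
os.environ["HTTPS_PROXY"] = "http://user:password@proxy.example.com:8080"
```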
You are welcome to file an issue here for general usage questions. You can also contact Databricks Support [here](https://help.databricks.com).
## Requirements
Python 3.8 or above is required.
## Documentation
For the latest documentation, see
- [Databricks](https://docs.databricks.com/dev-tools/python-sql-connector.html)
- [Azure Databricks](https://docs.microsoft.com/en-us/azure/databricks/dev-tools/python-sql-connector)
## Quickstart
### Installing the core library
Install using `pip install databricks-sql-connector`
### Installing the core library with PyArrow
Install using `pip install databricks-sql-connector[pyarrow]`
Set your Databricks server hostname and HTTP path as environment variables (values redacted here):
```bash
export DATABRICKS_HOST=********.databricks.com
export DATABRICKS_HTTP_PATH=/sql/1.0/endpoints/****************
```
Example usage:
```python
import os
from databricks import sql
host = os.getenv("DATABRICKS_HOST")
http_path = os.getenv("DATABRICKS_HTTP_PATH")
connection = sql.connect(
    server_hostname=host,
    http_path=http_path)
cursor = connection.cursor()
cursor.execute('SELECT :param `p`, * FROM RANGE(10)', {"param": "foo"})
result = cursor.fetchall()
for row in result:
    print(row)
cursor.close()
connection.close()
```
In the above example:
- `server_hostname` is the Databricks instance hostname.
- `http_path` is the HTTP path to either a Databricks SQL endpoint (e.g. `/sql/1.0/endpoints/1234567890abcdef`)
or a Databricks Runtime interactive cluster (e.g. `/sql/protocolv1/o/1234567890123456/1234-123456-slid123`).
> Note: This example uses [Databricks OAuth U2M](https://docs.databricks.com/en/dev-tools/auth/oauth-u2m.html)
> to authenticate the target Databricks user account and needs to open a browser for authentication,
> so it can only run on the user's machine.
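For non-interactive environments where opening a browser is not possible, the connector also accepts a personal access token via the `access_token` parameter. A minimal sketch, with `DATABRICKS_TOKEN` as an assumed variable name for the token:
```python
import os
from databricks import sql

# Token-based authentication: no browser round-trip, suitable for automation.
connection = sql.connect(
    server_hostname=os.getenv("DATABRICKS_HOST"),
    http_path=os.getenv("DATABRICKS_HTTP_PATH"),
    access_token=os.getenv("DATABRICKS_TOKEN"),
)
```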
## Transaction Support
The connector supports multi-statement transactions with manual commit/rollback control. Set `connection.autocommit = False` to disable autocommit mode, then use `connection.commit()` and `connection.rollback()` to control transactions.
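A minimal sketch of manual transaction control, assuming `connection` was created as in the Quickstart and `events` is a hypothetical table in a transaction-capable catalog:
```python
connection.autocommit = False  # leave autocommit mode
cursor = connection.cursor()
try:
    cursor.execute("INSERT INTO events VALUES (1, 'created')")
    cursor.execute("INSERT INTO events VALUES (2, 'updated')")
    connection.commit()    # both inserts become visible together
except Exception:
    connection.rollback()  # discard the partial work on failure
    raise
finally:
    cursor.close()
```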
For detailed documentation, examples, and best practices, see **[TRANSACTIONS.md](TRANSACTIONS.md)**.
## SQLAlchemy
Starting with `databricks-sql-connector` version 4.0.0, SQLAlchemy support has been extracted into a separate library, `databricks-sqlalchemy`.
- GitHub repository: [databricks/databricks-sqlalchemy](https://github.com/databricks/databricks-sqlalchemy)
- PyPI: [databricks-sqlalchemy](https://pypi.org/project/databricks-sqlalchemy/)
### Quick SQLAlchemy guide
Users can now choose between the SQLAlchemy v1 and SQLAlchemy v2 dialects with the connector core (a connection sketch follows this list):
- Install the latest SQLAlchemy v1 using `pip install databricks-sqlalchemy~=1.0`
- Install SQLAlchemy v2 using `pip install databricks-sqlalchemy`
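Once installed, a connection sketch; the URL shape follows the `databricks-sqlalchemy` project's documentation, so treat the exact query parameters, and the `DATABRICKS_TOKEN` variable name, as assumptions to verify there:
```python
import os
from sqlalchemy import create_engine, text

# The dialect registers the "databricks" URL scheme; credentials and routing
# are passed through the connection URL.
engine = create_engine(
    "databricks://token:{token}@{host}?http_path={path}&catalog=main&schema=default".format(
        token=os.getenv("DATABRICKS_TOKEN"),
        host=os.getenv("DATABRICKS_HOST"),
        path=os.getenv("DATABRICKS_HTTP_PATH"),
    )
)

with engine.connect() as conn:
    print(conn.execute(text("SELECT 1")).scalar())
```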
## Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md)
## License
[Apache License 2.0](LICENSE)
Raw data
{
    "_id": null,
    "home_page": null,
    "name": "databricks-sql-connector",
    "maintainer": null,
    "docs_url": null,
    "requires_python": "<4.0.0,>=3.8.0",
    "maintainer_email": null,
    "keywords": null,
    "author": "Databricks",
    "author_email": "databricks-sql-connector-maintainers@databricks.com",
    "download_url": "https://files.pythonhosted.org/packages/01/1a/9d63a395a7d235d1cbd236932e6583ac0dbef980e0f43f3f43080285df0f/databricks_sql_connector-4.2.0.tar.gz",
    "platform": null,
    "description": "# Databricks SQL Connector for Python\n\n[](https://pypi.org/project/databricks-sql-connector/)\n[](https://pepy.tech/project/databricks-sql-connector)\n\nThe Databricks SQL Connector for Python allows you to develop Python applications that connect to Databricks clusters and SQL warehouses. It is a Thrift-based client with no dependencies on ODBC or JDBC. It conforms to the [Python DB API 2.0 specification](https://www.python.org/dev/peps/pep-0249/).\n\nThis connector uses Arrow as the data-exchange format, and supports APIs (e.g. `fetchmany_arrow`) to directly fetch Arrow tables. Arrow tables are wrapped in the `ArrowQueue` class to provide a natural API to get several rows at a time. [PyArrow](https://arrow.apache.org/docs/python/index.html) is required to enable this and use these APIs, you can install it via `pip install pyarrow` or `pip install databricks-sql-connector[pyarrow]`.\n\nThe connector includes built-in support for HTTP/HTTPS proxy servers with multiple authentication methods including basic authentication and Kerberos/Negotiate authentication. See `docs/proxy.md` and `examples/proxy_authentication.py` for details.\n\nYou are welcome to file an issue here for general use cases. You can also contact Databricks Support [here](help.databricks.com).\n\n## Requirements\n\nPython 3.8 or above is required.\n\n## Documentation\n\nFor the latest documentation, see\n\n- [Databricks](https://docs.databricks.com/dev-tools/python-sql-connector.html)\n- [Azure Databricks](https://docs.microsoft.com/en-us/azure/databricks/dev-tools/python-sql-connector)\n\n## Quickstart\n\n### Installing the core library\nInstall using `pip install databricks-sql-connector`\n\n### Installing the core library with PyArrow\nInstall using `pip install databricks-sql-connector[pyarrow]`\n\n\n```bash\nexport DATABRICKS_HOST=********.databricks.com\nexport DATABRICKS_HTTP_PATH=/sql/1.0/endpoints/****************\n```\n\nExample usage:\n```python\nimport os\nfrom databricks import sql\n\nhost = os.getenv(\"DATABRICKS_HOST\")\nhttp_path = os.getenv(\"DATABRICKS_HTTP_PATH\")\n\nconnection = sql.connect(\n server_hostname=host,\n http_path=http_path)\n\ncursor = connection.cursor()\ncursor.execute('SELECT :param `p`, * FROM RANGE(10)', {\"param\": \"foo\"})\nresult = cursor.fetchall()\nfor row in result:\n print(row)\n\ncursor.close()\nconnection.close()\n```\n\nIn the above example:\n- `server-hostname` is the Databricks instance host name.\n- `http-path` is the HTTP Path either to a Databricks SQL endpoint (e.g. /sql/1.0/endpoints/1234567890abcdef),\nor to a Databricks Runtime interactive cluster (e.g. /sql/protocolv1/o/1234567890123456/1234-123456-slid123)\n\n> Note: This example uses [Databricks OAuth U2M](https://docs.databricks.com/en/dev-tools/auth/oauth-u2m.html) \n> to authenticate the target Databricks user account and needs to open the browser for authentication. So it \n> can only run on the user's machine.\n\n## Transaction Support\n\nThe connector supports multi-statement transactions with manual commit/rollback control. Set `connection.autocommit = False` to disable autocommit mode, then use `connection.commit()` and `connection.rollback()` to control transactions.\n\nFor detailed documentation, examples, and best practices, see **[TRANSACTIONS.md](TRANSACTIONS.md)**.\n\n## SQLAlchemy\nStarting from `databricks-sql-connector` version 4.0.0 SQLAlchemy support has been extracted to a new library `databricks-sqlalchemy`.\n\n- Github repository [databricks-sqlalchemy github](https://github.com/databricks/databricks-sqlalchemy)\n- PyPI [databricks-sqlalchemy pypi](https://pypi.org/project/databricks-sqlalchemy/)\n\n### Quick SQLAlchemy guide\nUsers can now choose between using the SQLAlchemy v1 or SQLAlchemy v2 dialects with the connector core\n\n- Install the latest SQLAlchemy v1 using `pip install databricks-sqlalchemy~=1.0`\n- Install SQLAlchemy v2 using `pip install databricks-sqlalchemy`\n\n\n## Contributing\n\nSee [CONTRIBUTING.md](CONTRIBUTING.md)\n\n## License\n\n[Apache License 2.0](LICENSE)\n",
    "bugtrack_url": null,
    "license": "Apache-2.0",
    "summary": "Databricks SQL Connector for Python",
    "version": "4.2.0",
    "project_urls": {
        "Bug Tracker": "https://github.com/databricks/databricks-sql-python/issues",
        "Homepage": "https://github.com/databricks/databricks-sql-python"
    },
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "a4f0a278600c3d11eecb17459f3237e2a3a52eaff21df958123ab8e3d92d7591",
                "md5": "88c994b3f86757a5913c1d5c4cf49be0",
                "sha256": "981f631ed8caa892745c7fdbc136fb085848e45a495ca3ecebead4fc5723813e"
            },
            "downloads": -1,
            "filename": "databricks_sql_connector-4.2.0-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "88c994b3f86757a5913c1d5c4cf49be0",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": "<4.0.0,>=3.8.0",
            "size": 206254,
            "upload_time": "2025-11-14T08:04:17",
            "upload_time_iso_8601": "2025-11-14T08:04:17.795685Z",
            "url": "https://files.pythonhosted.org/packages/a4/f0/a278600c3d11eecb17459f3237e2a3a52eaff21df958123ab8e3d92d7591/databricks_sql_connector-4.2.0-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "011a9d63a395a7d235d1cbd236932e6583ac0dbef980e0f43f3f43080285df0f",
                "md5": "959f2e349dfe22ed55a51f26afaf7f78",
                "sha256": "2682f8a86b8e9c664a27cd26943ad8269925664c720eada291106f2a62f2aaab"
            },
            "downloads": -1,
            "filename": "databricks_sql_connector-4.2.0.tar.gz",
            "has_sig": false,
            "md5_digest": "959f2e349dfe22ed55a51f26afaf7f78",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": "<4.0.0,>=3.8.0",
            "size": 182178,
            "upload_time": "2025-11-14T08:04:19",
            "upload_time_iso_8601": "2025-11-14T08:04:19.480405Z",
            "url": "https://files.pythonhosted.org/packages/01/1a/9d63a395a7d235d1cbd236932e6583ac0dbef980e0f43f3f43080285df0f/databricks_sql_connector-4.2.0.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-11-14 08:04:19",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "databricks",
    "github_project": "databricks-sql-python",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "databricks-sql-connector"
}