| Field | Value |
| --- | --- |
| Name | databricks-sql-connector |
| Version | 3.1.2 |
| Summary | Databricks SQL Connector for Python |
| Author | Databricks |
| License | Apache-2.0 |
| Requires Python | <4.0.0,>=3.8.0 |
| Upload time | 2024-04-18 17:09:33 |
# Databricks SQL Connector for Python
[![PyPI](https://img.shields.io/pypi/v/databricks-sql-connector?style=flat-square)](https://pypi.org/project/databricks-sql-connector/)
[![Downloads](https://pepy.tech/badge/databricks-sql-connector)](https://pepy.tech/project/databricks-sql-connector)
The Databricks SQL Connector for Python allows you to develop Python applications that connect to Databricks clusters and SQL warehouses. It is a Thrift-based client with no dependencies on ODBC or JDBC. It conforms to the [Python DB API 2.0 specification](https://www.python.org/dev/peps/pep-0249/) and exposes a [SQLAlchemy](https://www.sqlalchemy.org/) dialect for use with tools like `pandas` and `alembic`, which use SQLAlchemy to execute DDL. Use `pip install databricks-sql-connector[sqlalchemy]` to install the SQLAlchemy dependencies; `pip install databricks-sql-connector[alembic]` installs Alembic's dependencies.
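As a sketch of how the SQLAlchemy dialect is typically wired up: the connector's documentation describes a `databricks://` URL scheme carrying the token, host, and HTTP path. The helper below is hypothetical (not part of the library), and the catalog/schema defaults are placeholders you would adjust for your workspace.

```python
import os

def databricks_url(host: str, http_path: str, token: str,
                   catalog: str = "main", schema: str = "default") -> str:
    """Build a SQLAlchemy connection URL for the Databricks dialect.

    Hypothetical helper; the databricks:// scheme follows the connector's
    documented URL format.
    """
    return (
        f"databricks://token:{token}@{host}"
        f"?http_path={http_path}&catalog={catalog}&schema={schema}"
    )

# Placeholder fallbacks shown only so the snippet runs standalone.
url = databricks_url(
    os.getenv("DATABRICKS_HOST", "example.databricks.com"),
    os.getenv("DATABRICKS_HTTP_PATH", "/sql/1.0/endpoints/abc123"),
    os.getenv("DATABRICKS_TOKEN", "dapi-example"),
)
# engine = sqlalchemy.create_engine(url)  # requires the [sqlalchemy] extra
```

From there, `pandas.read_sql` or Alembic migrations can use the engine like any other SQLAlchemy backend.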
This connector uses Arrow as the data-exchange format, and supports APIs to directly fetch Arrow tables. Arrow tables are wrapped in the `ArrowQueue` class to provide a natural API to get several rows at a time.
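For large result sets, batched Arrow fetching avoids materializing everything at once. The generator below is a sketch assuming the cursor exposes a `fetchmany_arrow(n)` method returning an Arrow table with a `num_rows` attribute, as recent connector versions do; the helper itself is not part of the library.

```python
def iter_arrow_batches(cursor, batch_size: int = 10_000):
    """Yield Arrow tables from an executed cursor until the result set is drained.

    Assumes `cursor.fetchmany_arrow(n)` returns a table-like object with a
    `num_rows` attribute; an empty batch signals the end of the result set.
    """
    while True:
        table = cursor.fetchmany_arrow(batch_size)
        if table.num_rows == 0:
            break
        yield table
```

Typical use after `cursor.execute(...)` would be `for table in iter_arrow_batches(cursor): process(table.to_pandas())`.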
You are welcome to file an issue here for general usage questions. You can also contact Databricks Support [here](https://help.databricks.com).
## Requirements
Python 3.8 or above is required.
## Documentation
For the latest documentation, see
- [Databricks](https://docs.databricks.com/dev-tools/python-sql-connector.html)
- [Azure Databricks](https://docs.microsoft.com/en-us/azure/databricks/dev-tools/python-sql-connector)
## Quickstart
Install the library with `pip install databricks-sql-connector`.

Note: don't hard-code authentication secrets into your Python code. Use environment variables instead:
```bash
export DATABRICKS_HOST=********.databricks.com
export DATABRICKS_HTTP_PATH=/sql/1.0/endpoints/****************
export DATABRICKS_TOKEN=dapi********************************
```
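Since a missing variable would otherwise surface later as a confusing connection error, a small helper can fail fast with a clear message. This `require_env` function is a hypothetical convenience, not part of the connector.

```python
import os

def require_env(name: str) -> str:
    """Read a required configuration value, failing fast if it is unset."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"Set the {name} environment variable before connecting.")
    return value

# host = require_env("DATABRICKS_HOST")
# http_path = require_env("DATABRICKS_HTTP_PATH")
# access_token = require_env("DATABRICKS_TOKEN")
```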
Example usage:
```python
import os
from databricks import sql

host = os.getenv("DATABRICKS_HOST")
http_path = os.getenv("DATABRICKS_HTTP_PATH")
access_token = os.getenv("DATABRICKS_TOKEN")

connection = sql.connect(
    server_hostname=host,
    http_path=http_path,
    access_token=access_token,
)

cursor = connection.cursor()
cursor.execute("SELECT :param `p`, * FROM RANGE(10)", {"param": "foo"})
result = cursor.fetchall()
for row in result:
    print(row)

cursor.close()
connection.close()
```
In the above example:
- `server_hostname` is the Databricks instance host name.
- `http_path` is the HTTP path to either a Databricks SQL endpoint (e.g. `/sql/1.0/endpoints/1234567890abcdef`) or a Databricks Runtime interactive cluster (e.g. `/sql/protocolv1/o/1234567890123456/1234-123456-slid123`).
- `access_token` is a Databricks personal access token for the account that will execute commands and queries.
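The quickstart closes the cursor and connection manually, which leaks them if a query raises. A small helper built on `contextlib.closing` guarantees cleanup with any DB API 2.0 connection, including this one; the `connect` factory argument here is a hypothetical convention, e.g. `lambda: sql.connect(server_hostname=..., http_path=..., access_token=...)`.

```python
from contextlib import closing

def run_query(connect, query, params=None):
    """Run one query and return all rows, closing cursor and connection even on error.

    `connect` is any zero-argument callable returning a DB API 2.0 connection.
    """
    with closing(connect()) as connection:
        with closing(connection.cursor()) as cursor:
            cursor.execute(query, params or {})
            return cursor.fetchall()
```

For example: `rows = run_query(lambda: sql.connect(...), "SELECT :param `p`, * FROM RANGE(10)", {"param": "foo"})`.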
## Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md)
## License
[Apache License 2.0](LICENSE)