in-dbt-spark

Name: in-dbt-spark
Version: 1.9.8
Summary: Release for LinkedIn's changes to dbt-spark.
Upload time: 2025-07-24 05:26:44
Requires Python: >=3.9.0
Keywords: linkedin, adapter, adapters, database, dbt core, dbt labs, dbt-core, elt, in-dbt, spark
Requirements: no requirements were recorded.
            <p align="center">
    <img
        src="https://raw.githubusercontent.com/dbt-labs/dbt/ec7dee39f793aa4f7dd3dae37282cc87664813e4/etc/dbt-logo-full.svg"
        alt="dbt logo"
        width="500"
    />
</p>

<p align="center">
    <a href="https://pypi.org/project/dbt-spark/">
        <img src="https://badge.fury.io/py/dbt-spark.svg" />
    </a>
    <a target="_blank" href="https://pypi.org/project/dbt-spark/" style="background:none">
        <img src="https://img.shields.io/pypi/pyversions/dbt-spark">
    </a>
    <a href="https://github.com/psf/black">
        <img src="https://img.shields.io/badge/code%20style-black-000000.svg" />
    </a>
    <a href="https://github.com/python/mypy">
        <img src="https://www.mypy-lang.org/static/mypy_badge.svg" />
    </a>
    <a href="https://pepy.tech/project/dbt-spark">
        <img src="https://static.pepy.tech/badge/dbt-spark/month" />
    </a>
</p>

# dbt

**[dbt](https://www.getdbt.com/)** enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.

dbt is the T in ELT. Organize, cleanse, denormalize, filter, rename, and pre-aggregate the raw data in your warehouse so that it's ready for analysis.

## dbt-spark

`dbt-spark` enables dbt to work with Apache Spark.
For more information on using dbt with Spark, consult [the docs](https://docs.getdbt.com/docs/profile-spark).

# Getting started

Review the repository [README.md](../README.md); most of that information also pertains to `dbt-spark`.

## Running locally

A `docker-compose` environment starts a Spark Thrift server and a Postgres database as a Hive Metastore backend.
Note: dbt-spark now supports Spark 3.3.2.

The following command starts two docker containers:

```sh
docker-compose up -d
```

The instance takes a little while to start; you can check the logs of the two containers to monitor progress.
If the instance doesn't start correctly, try the complete reset commands listed below and then start it again.
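
One way to script the "wait for the instance to start" step is to poll the Thrift port from Python. This is a sketch of our own, not part of dbt-spark; it assumes the host and port used in the profile below:

```python
import socket
import time


def wait_for_port(host: str, port: int, timeout: float = 120.0) -> bool:
    """Poll host:port until something accepts a TCP connection, or give up."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful TCP handshake means the Thrift server is listening.
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(2)  # not up yet; retry until the deadline
    return False
```

For the compose setup here you would call `wait_for_port("127.0.0.1", 10000)` after `docker-compose up -d`.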

Create a profile like this one:

```yaml
spark_testing:
  target: local
  outputs:
    local:
      type: spark
      method: thrift
      host: 127.0.0.1
      port: 10000
      user: dbt
      schema: analytics
      connect_retries: 5
      connect_timeout: 60
      retry_all: true
```
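
To sanity-check a profile target before running dbt, a small sketch (our own helper, not a dbt API) can assert that the keys a thrift connection needs are present:

```python
# Keys a thrift-method Spark target generally needs (our list, not dbt's schema).
REQUIRED_KEYS = {"type", "method", "host", "port", "schema"}


def missing_keys(target: dict) -> set:
    """Return the required keys absent from a profile target dict."""
    return REQUIRED_KEYS - target.keys()


# The 'local' target from the profile above, as a plain dict.
local_target = {
    "type": "spark",
    "method": "thrift",
    "host": "127.0.0.1",
    "port": 10000,
    "user": "dbt",
    "schema": "analytics",
}
```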

Connecting to the local Spark instance:

* The Spark UI should be available at [http://localhost:4040/sqlserver/](http://localhost:4040/sqlserver/)
* The endpoint for SQL-based testing is at `http://localhost:10000` and can be referenced with the Hive or Spark JDBC drivers using connection string `jdbc:hive2://localhost:10000` and default credentials `dbt`:`dbt`
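
The JDBC connection string above can be assembled from the profile's host and port; a minimal helper (our own name, not a library function) might look like:

```python
def hive2_jdbc_url(host: str, port: int) -> str:
    """Build a HiveServer2-style JDBC URL like the one used for SQL-based testing."""
    return f"jdbc:hive2://{host}:{port}"
```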

Note that the Hive metastore data is persisted under `./.hive-metastore/`, and the Spark-produced data under `./.spark-warehouse/`. To completely reset your environment, run the following:

```sh
docker-compose down
rm -rf ./.hive-metastore/
rm -rf ./.spark-warehouse/
```
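
The directory removal part of the cleanup above can also be sketched in cross-platform Python (paths as in the compose setup; `reset_local_state` is our name). Note this does not replace `docker-compose down`, which still has to stop the containers:

```python
import shutil
from pathlib import Path


def reset_local_state(base: str = ".") -> None:
    """Delete the persisted metastore and warehouse directories, if present."""
    for name in (".hive-metastore", ".spark-warehouse"):
        # ignore_errors makes this a no-op when the directory doesn't exist.
        shutil.rmtree(Path(base) / name, ignore_errors=True)
```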

## Additional Configuration for macOS

If installing on macOS, use Homebrew to install the required dependencies:

```sh
brew install unixodbc
```

## Contribute

- Want to help us build `dbt-spark`? Check out the [Contributing Guide](CONTRIBUTING.md).

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "in-dbt-spark",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.9.0",
    "maintainer_email": "LinkedIn DBT Team <dbt-eng@linkedin.com>",
    "keywords": "LinkedIn, adapter, adapters, database, dbt Core, dbt Labs, dbt-core, elt, in-dbt, spark",
    "author": null,
    "author_email": "LinkedIn DBT Team <dbt-eng@linkedin.com>",
    "download_url": "https://files.pythonhosted.org/packages/ec/e7/28137b972ea0181f599c9ddaf3ab6c1d69fc0a49e2ed7afa083e812e245b/in_dbt_spark-1.9.8.tar.gz",
    "platform": null,
    "description": "(verbatim duplicate of the README shown above; omitted)",
    "bugtrack_url": null,
    "license": null,
    "summary": "Release for LinkedIn's changes to dbt-spark.",
    "version": "1.9.8",
    "project_urls": {
        "Changelog": "https://github.com/linkedin-managed/in-dbt-adapters/blob/main/dbt-spark/CHANGELOG.md",
        "Documentation": "https://docs.getdbt.com",
        "Homepage": "https://github.com/dbt-labs/dbt-adapters/tree/main/dbt-spark",
        "Issues": "https://github.com/linkedin-managed/in-dbt-adapters",
        "Repository": "https://github.com/linkedin-managed/in-dbt-adapters#subdirectory=dbt-spark"
    },
    "split_keywords": [
        "linkedin",
        " adapter",
        " adapters",
        " database",
        " dbt core",
        " dbt labs",
        " dbt-core",
        " elt",
        " in-dbt",
        " spark"
    ],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "05fbd1f71100a09f4a1927bd3676b33729af6c880bb37212e67fbe1cdff16a15",
                "md5": "eaad579caa4dd1b7bbf24eefbe6e76fb",
                "sha256": "772f6338db882a5c8209f8065a78261cd591046ba3a6be122f8533134fdf0fe9"
            },
            "downloads": -1,
            "filename": "in_dbt_spark-1.9.8-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "eaad579caa4dd1b7bbf24eefbe6e76fb",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.9.0",
            "size": 93213,
            "upload_time": "2025-07-24T05:26:42",
            "upload_time_iso_8601": "2025-07-24T05:26:42.038999Z",
            "url": "https://files.pythonhosted.org/packages/05/fb/d1f71100a09f4a1927bd3676b33729af6c880bb37212e67fbe1cdff16a15/in_dbt_spark-1.9.8-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "ece728137b972ea0181f599c9ddaf3ab6c1d69fc0a49e2ed7afa083e812e245b",
                "md5": "405def8279fe84f17feb85f38b3313c5",
                "sha256": "c4c0427f0f85aeb048027f51dfb2b22daae01f43daa96b2c3dbfcf647c3df81d"
            },
            "downloads": -1,
            "filename": "in_dbt_spark-1.9.8.tar.gz",
            "has_sig": false,
            "md5_digest": "405def8279fe84f17feb85f38b3313c5",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.9.0",
            "size": 104842,
            "upload_time": "2025-07-24T05:26:44",
            "upload_time_iso_8601": "2025-07-24T05:26:44.264551Z",
            "url": "https://files.pythonhosted.org/packages/ec/e7/28137b972ea0181f599c9ddaf3ab6c1d69fc0a49e2ed7afa083e812e245b/in_dbt_spark-1.9.8.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-07-24 05:26:44",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "linkedin-managed",
    "github_project": "in-dbt-adapters",
    "github_not_found": true,
    "lcname": "in-dbt-spark"
}
        