laktory

Name: laktory
Version: 0.6.2
Summary: An ETL and DataOps framework for building a lakehouse
Upload time: 2025-01-22 03:28:35
Requires Python: >=3.9
License: MIT (Copyright (c) 2023 okube)
Keywords: apache-spark, data-pipeline, dataframes, etl, infrastructure-as-code, polars, python, sql
# Laktory

[![pypi](https://img.shields.io/pypi/v/laktory.svg)](https://pypi.org/project/laktory/)
[![test](https://github.com/okube-ai/laktory/actions/workflows/test.yml/badge.svg)](https://github.com/okube-ai/laktory/actions/workflows/test.yml)
[![downloads](https://static.pepy.tech/badge/laktory/month)](https://pepy.tech/project/laktory)
[![versions](https://img.shields.io/pypi/pyversions/laktory.svg)](https://github.com/okube-ai/laktory)
[![license](https://img.shields.io/github/license/okube-ai/laktory.svg)](https://github.com/okube-ai/laktory/blob/main/LICENSE)

An open-source DataOps and dataframe-centric ETL framework for building 
lakehouses.

<img src="docs/images/logo_sg.png" alt="laktory logo" width="85"/>

Laktory is your all-in-one solution for defining both data transformations and 
Databricks resources. Imagine if Terraform, Databricks Asset Bundles, and dbt
combined forces—that’s essentially Laktory.

This open-source framework simplifies the creation, deployment, and execution 
of data pipelines while adhering to essential DevOps practices like version 
control, code reviews, and CI/CD integration. With Apache Spark and Polars
driving its data transformation, Laktory ensures reliable and scalable data
processing. Its modular, flexible approach allows you to seamlessly combine SQL
statements with DataFrame operations.
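
For instance, a single transformer chain can mix a SQL step with a DataFrame-API step. The snippet below is a minimal preview of the fuller example further down, using the same `sql_expr` and `func_name` node keys:

```py
# Sketch of a transformer chain mixing SQL and DataFrame operations.
# `{df}` refers to the dataframe produced by the previous step.
transformer = {
    "nodes": [
        # SQL step
        {"sql_expr": "SELECT symbol, open, close FROM {df}"},
        # DataFrame-API step chained onto the SQL result
        {"func_name": "drop_duplicates", "func_kwargs": {"subset": ["symbol"]}},
    ]
}
```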

<img src="docs/images/laktory_diagram.png" alt="what is laktory" width="800"/>

Since Laktory pipelines are built on top of Spark and Polars, they can run in
any environment that supports Python—from your local machine to a Kubernetes
cluster. They can also be deployed and orchestrated as Databricks Jobs or
[Delta Live Tables](https://www.databricks.com/product/delta-live-tables),
offering a simple, fully managed, and low-maintenance solution.
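
As a rough sketch (the orchestrator value below is hypothetical and may differ in your Laktory version), targeting Databricks is a matter of declaring an orchestrator on the same `Pipeline` model used for local execution:

```py
from laktory import models

pipeline = models.Pipeline(
    name="stock_prices",
    nodes=[
        {
            "name": "brz_stock_prices",
            "source": {"format": "PARQUET", "path": "./data/brz_stock_prices/"},
        },
    ],
    # Hypothetical value: selects a Databricks Job as the orchestrator instead
    # of running the pipeline locally with pipeline.execute().
    orchestrator="DATABRICKS_JOB",
)
```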

But Laktory goes beyond data pipelines. It empowers you to define and deploy 
your entire Databricks data platform—from Unity Catalog and access grants
to compute and quality monitoring—providing a complete, modern solution
for data platform management. This lets your data team take full ownership
of the solution without having to juggle multiple technologies.
Say goodbye to relying on external Terraform experts to handle compute, workspace
configuration, and Unity Catalog, while your data engineers and analysts try 
to combine Databricks Asset Bundles and dbt to build data pipelines. Laktory
consolidates these functions, simplifying the entire process and reducing
the overall cost.

<img src="docs/images/why_simplicity.png" alt="dataops" width="500"/>


## Help
See [documentation](https://www.laktory.ai/) for more details.

## Installation
Install using:
```commandline
pip install laktory
```
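
To confirm the package is available, you can print its version (a quick sanity check, assuming the standard `__version__` attribute is exposed):
```commandline
python -c "import laktory; print(laktory.__version__)"
```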

For more installation options,
see the [Install](https://www.laktory.ai/install/) section in the documentation.

## A Basic Example
```py
from laktory import models


# Bronze node: read the raw parquet files as-is (no transformations defined)
node_brz = models.PipelineNode(
    name="brz_stock_prices",
    source={
        "format": "PARQUET",
        "path": "./data/brz_stock_prices/"
    },
    transformer={
        "nodes": [
        ]
    }
)

# Silver node: read the bronze node's output, clean it and write it as parquet
node_slv = models.PipelineNode(
    name="slv_stock_prices",
    source={
        "node_name": "brz_stock_prices"
    },
    sinks=[{
        "path": "./data/slv_stock_prices",
        "mode": "OVERWRITE",
        "format": "PARQUET",
    }],
    transformer={
        "nodes": [
            
            # SQL Transformation
            {
                "sql_expr": """
                    SELECT
                      data.created_at AS created_at,
                      data.symbol AS symbol,
                      data.open AS open,
                      data.close AS close,
                      data.high AS high,
                      data.low AS low,
                      data.volume AS volume
                    FROM
                      {df}
                """   
            },
            
            # Spark Transformation
            {
                "func_name": "drop_duplicates",
                "func_kwargs": {
                    "subset": ["created_at", "symbol"]
                }
            },
        ]
    }
)

pipeline = models.Pipeline(
    name="stock_prices",
    nodes=[node_brz, node_slv],
)

print(pipeline)
#> resource_name_=None options=ResourceOptions(variables={}, depends_on=[], provider=None, aliases=None, delete_before_replace=True, ignore_changes=None, import_=None, parent=None, replace_on_changes=None) variables={} databricks_job=None dlt=None name='stock_prices' nodes=[PipelineNode(variables={}, add_layer_columns=True, dlt_template='DEFAULT', description=None, drop_duplicates=None, drop_source_columns=False, transformer=SparkChain(variables={}, nodes=[SparkChainNode(variables={}, allow_missing_column_args=False, column=None, spark_func_args=[SparkFuncArg(variables={}, value='symbol'), SparkFuncArg(variables={}, value='timestamp'), SparkFuncArg(variables={}, value='open'), SparkFuncArg(variables={}, value='close')], spark_func_kwargs={}, spark_func_name='select', sql_expression=None)]), expectations=[], layer='BRONZE', name='brz_stock_prices', primary_key=None, sink=None, source=FileDataSource(variables={}, as_stream=False, broadcast=False, cdc=None, dataframe_backend='SPARK', drops=None, filter=None, mock_df=None, renames=None, selects=None, watermark=None, format='PARQUET', header=True, multiline=False, path='./data/brz_stock_prices/', read_options={}, schema_location=None), timestamp_key=None), PipelineNode(variables={}, add_layer_columns=True, dlt_template='DEFAULT', description=None, drop_duplicates=None, drop_source_columns=True, transformer=SparkChain(variables={}, nodes=[SparkChainNode(variables={}, allow_missing_column_args=False, column=None, spark_func_args=[], spark_func_kwargs={'subset': SparkFuncArg(variables={}, value=['timestamp', 'symbol'])}, spark_func_name='drop_duplicates', sql_expression=None)]), expectations=[], layer='SILVER', name='slv_stock_prices', primary_key=None, sink=FileDataSink(variables={}, mode='OVERWRITE', checkpoint_location=None, format='PARQUET', path='./data/slv_stock_prices', write_options={}), source=PipelineNodeDataSource(variables={}, as_stream=False, broadcast=False, cdc=None, dataframe_backend='SPARK', drops=None, filter=None, mock_df=None, renames=None, selects=None, watermark=None, node_name='brz_stock_prices', node=PipelineNode(variables={}, add_layer_columns=True, dlt_template='DEFAULT', description=None, drop_duplicates=None, drop_source_columns=False, transformer=SparkChain(variables={}, nodes=[SparkChainNode(variables={}, allow_missing_column_args=False, column=None, spark_func_args=[SparkFuncArg(variables={}, value='symbol'), SparkFuncArg(variables={}, value='timestamp'), SparkFuncArg(variables={}, value='open'), SparkFuncArg(variables={}, value='close')], spark_func_kwargs={}, spark_func_name='select', sql_expression=None)]), expectations=[], layer='BRONZE', name='brz_stock_prices', primary_key=None, sink=None, source=FileDataSource(variables={}, as_stream=False, broadcast=False, cdc=None, dataframe_backend='SPARK', drops=None, filter=None, mock_df=None, renames=None, selects=None, watermark=None, format='PARQUET', header=True, multiline=False, path='./data/brz_stock_prices/', read_options={}, schema_location=None), timestamp_key=None)), timestamp_key=None)] orchestrator=None udfs=[]

pipeline.execute(spark=spark)
```
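
Since the silver sink above writes plain Parquet files, the result can be inspected with the same Spark session once the pipeline has run (a minimal sketch reusing the `spark` session and sink path from the example):

```py
# Read back the silver-layer output written by the file sink above
df_slv = spark.read.format("parquet").load("./data/slv_stock_prices")
df_slv.show(5)
```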

To get started with a more useful example, jump into the [Quickstart](https://www.laktory.ai/quickstart/).

## Get Involved
Laktory is growing rapidly, and we’d love for you to be part of our journey! Here’s how 
you can get involved:
- **Join the Community**: Connect with fellow Laktory users and contributors on our [Slack](http://okube.slack.com/). Share ideas, ask questions, and collaborate!
- **Suggest Features or Report Issues**: Have an idea for a new feature or encountering an issue? Let us know on [GitHub Issues](https://github.com/okube-ai/laktory/issues). Your feedback helps shape the future of Laktory!
- **Contribute to Laktory**: Check out our [contributing guide](CONTRIBUTING.md) to learn how you can tackle issues and add value to the project.

## A Lakehouse DataOps Template
A comprehensive template on how to deploy a lakehouse as code using Laktory is maintained here:
https://github.com/okube-ai/lakehouse-as-code.

In this template, four Pulumi projects are used to:
- `{cloud_provider}_infra`: Deploy the required resources on your cloud provider
- `unity-catalog`: Set up users, groups, catalogs, and schemas, and manage grants
- `workspace`: Set up secrets, clusters, warehouses, and common files/notebooks
- `workflows`: Deploy the data workflows that build your lakehouse

## Okube Company
<img src="docs/images/okube.png" alt="okube logo" width="85"/>

[Okube](https://www.okube.ai) is dedicated to building open-source frameworks, known as the *kubes*, that empower businesses to build, deploy, and operate highly scalable data platforms and AI models.


            
