dbldatagen

Name: dbldatagen
Version: 0.4.0.post1 ([PyPI](https://pypi.org/project/dbldatagen/))
Home page: https://github.com/databrickslabs/data-generator
Summary: Databricks Labs - PySpark Synthetic Data Generator
Upload time: 2024-07-26 06:02:49
Author: Ronan Stokes, Databricks
Requires Python: >=3.8.10
License: Databricks License
            # Databricks Labs Data Generator (`dbldatagen`) 

<!-- Top bar will be removed from PyPi packaged versions -->

[![build](https://github.com/databrickslabs/dbldatagen/workflows/build/badge.svg?branch=master)](https://github.com/databrickslabs/dbldatagen/actions?query=workflow%3Abuild+branch%3Amaster)
[![PyPi package](https://img.shields.io/pypi/v/dbldatagen?color=green)](https://pypi.org/project/dbldatagen/)
[![codecov](https://codecov.io/gh/databrickslabs/dbldatagen/branch/master/graph/badge.svg)](https://codecov.io/gh/databrickslabs/dbldatagen)
[![PyPi downloads](https://img.shields.io/pypi/dm/dbldatagen?label=PyPi%20Downloads)](https://pypistats.org/packages/dbldatagen)
[![lines of code](https://tokei.rs/b1/github/databrickslabs/dbldatagen)](https://github.com/databrickslabs/dbldatagen)

<!-- 
[![Language grade: Python](https://img.shields.io/lgtm/grade/python/g/databrickslabs/dbldatagen.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/databrickslabs/dbldatagen/context:python)
[![downloads](https://img.shields.io/github/downloads/databrickslabs/dbldatagen/total.svg)](https://hanadigital.github.io/grev/?user=databrickslabs&repo=dbldatagen)
-->

## Project Description
The `dbldatagen` Databricks Labs project is a Python library for generating synthetic data within the Databricks 
environment using Spark. The generated data may be used for testing, benchmarking, demos, and many 
other uses.

It operates by defining a data generation specification in code that controls 
how the synthetic data is generated.
The specification may incorporate the use of existing schemas or create data in an ad-hoc fashion.

It has no dependencies on any libraries that are not already installed in the Databricks 
runtime, and you can use it from Scala, R or other languages by defining
a view over the generated data.

### Feature Summary
It supports:
* Generating synthetic data at scale up to billions of rows within minutes using appropriately sized clusters 
* Generating repeatable, predictable data supporting the need for producing multiple tables, Change Data Capture, 
merge and join scenarios with consistency between primary and foreign keys
* Generating synthetic data for all of the Spark SQL supported primitive types as a Spark data frame, 
which may be persisted, saved to external storage, or used in other computations
* Generating ranges of dates, timestamps, and numeric values
* Generation of discrete values - both numeric and text
* Generation of values at random and based on the values of other fields 
(either based on the `hash` of the underlying values or the values themselves)
* Ability to specify a distribution for random data generation 
* Generating arrays of values for ML-style feature arrays
* Applying weights to the occurrence of values
* Generating values to conform to a schema or independent of an existing schema
* Use of SQL expressions in synthetic data generation
* A plugin mechanism to allow use of 3rd-party libraries such as Faker
* Use within a Databricks Delta Live Tables pipeline as a synthetic data generation source
* Generating synthetic data generation code from an existing schema or data (experimental)
* Use of standard datasets for quick generation of synthetic data
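To build intuition for the value weighting feature listed above, the following pure-Python sketch (for illustration only, not library code) shows how relative weights such as `[9, 1, 1]` translate into selection probabilities:

```python
import random

# Hypothetical illustration (not library code): relative weights are
# normalized into selection probabilities, so weights [9, 1, 1] over the
# values ['a', 'b', 'c'] mean 'a' is drawn roughly 9 times out of 11.
values = ['a', 'b', 'c']
weights = [9, 1, 1]
total = sum(weights)
probabilities = [w / total for w in weights]

# Drawing a weighted sample the way a generator conceptually would
sample = random.choices(values, weights=weights, k=11_000)
```

The library itself performs weighted generation at Spark scale; this snippet only mirrors the underlying proportions.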

Details of these features can be found in the 
[online documentation](https://databrickslabs.github.io/dbldatagen/public_docs/index.html). 

## Documentation

Please refer to the [online documentation](https://databrickslabs.github.io/dbldatagen/public_docs/index.html) for 
details of use and many examples.

Release notes and details of the latest changes for this specific release
can be found in the GitHub repository
[here](https://github.com/databrickslabs/dbldatagen/blob/release/v0.4.0post1/CHANGELOG.md).

## Installation

Use `pip install dbldatagen` to install the PyPI package.

Within a Databricks notebook, invoke the following in a notebook cell:
```commandline
%pip install dbldatagen
```

The `%pip install` command can be invoked within a Databricks notebook or a Delta Live Tables pipeline, 
and it even works on the Databricks Community Edition.

The [installation notes](https://databrickslabs.github.io/dbldatagen/public_docs/installation_notes.html) 
in the documentation contain details of installation using alternative mechanisms.

## Compatibility 
The Databricks Labs Data Generator framework can be used with PySpark 3.1.2 and Python 3.8 or later. These are 
compatible with Databricks runtime 10.4 LTS and later releases. For full Unity Catalog support, 
we recommend using Databricks runtime 13.2 or later (Databricks 13.3 LTS or above preferred).

For full library compatibility for a specific Databricks Spark release, see the Databricks 
release notes:

- https://docs.databricks.com/release-notes/runtime/releases.html

When using the Databricks Labs Data Generator on "Unity Catalog" enabled Databricks environments, 
the Data Generator requires the use of `Single User` or `No Isolation Shared` access modes when using Databricks 
runtimes prior to release 13.2. This is because some needed features are not available in `Shared` 
mode (for example, use of 3rd party libraries, use of Python UDFs) in these releases. 
Depending on settings, the `Custom` access mode may be supported.

The use of Unity Catalog `Shared` access mode is supported in Databricks runtimes from Databricks runtime release 13.2
onwards.

See the following documentation for more information:

- https://docs.databricks.com/data-governance/unity-catalog/compute.html

## Using the Data Generator
To use the data generator, install the library using the `%pip install` method or install the Python wheel directly 
in your environment.

Once the library has been installed, you can use it to generate a data frame composed of synthetic data.

The easiest way to use the data generator is to start from one of the standard datasets, which can be further 
customized for your use case.

```python
import dbldatagen as dg

df = dg.Datasets(spark, "basic/user").get(rows=1_000_000).build()
num_rows = df.count()
```

You can also define fully custom datasets using the `DataGenerator` class. For example:

```python
import dbldatagen as dg
from pyspark.sql.types import IntegerType, FloatType, StringType

column_count = 10
data_rows = 1000 * 1000

df_spec = (dg.DataGenerator(spark, name="test_data_set1", rows=data_rows,
                            partitions=4)
           .withIdOutput()
           .withColumn("r", FloatType(),
                       expr="floor(rand() * 350) * (86400 + 3600)",
                       numColumns=column_count)
           .withColumn("code1", IntegerType(), minValue=100, maxValue=200)
           .withColumn("code2", IntegerType(), minValue=0, maxValue=10)
           .withColumn("code3", StringType(), values=['a', 'b', 'c'])
           .withColumn("code4", StringType(), values=['a', 'b', 'c'],
                       random=True)
           .withColumn("code5", StringType(), values=['a', 'b', 'c'],
                       random=True, weights=[9, 1, 1])
           )

df = df_spec.build()
num_rows = df.count()
```
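As a sanity check on the example above, the SQL expression used for column `r` has a predictable range: `floor(rand() * 350)` yields integers 0 through 349, so the product spans 0 to 349 * 90000 in multiples of 90000. A pure-Python mirror of that expression (for illustration only, not library code):

```python
import math
import random

# Pure-Python mirror (illustration only) of the SQL expression
#   floor(rand() * 350) * (86400 + 3600)
# Spark SQL's rand() is uniform on [0, 1), so floor(rand() * 350)
# yields integers in [0, 349] and each result is a multiple of 90000.
def r_value(u: float) -> int:
    return math.floor(u * 350) * (86400 + 3600)

lo = r_value(0.0)          # smallest possible value: 0
hi = r_value(0.999999)     # largest possible value: 349 * 90000
samples = [r_value(random.random()) for _ in range(1000)]
```

Working through the expression this way makes it easier to verify that generated columns land in the ranges you intend before building data at scale.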
Refer to the [online documentation](https://databrickslabs.github.io/dbldatagen/public_docs/index.html) for further 
examples. 

The GitHub repository also contains further examples in the examples directory.

## Spark and Databricks Runtime Compatibility
The `dbldatagen` package is intended to be compatible with recent LTS versions of the Databricks runtime, including 
older LTS versions at least from 10.4 LTS onwards. It also aims to be compatible with Delta Live Tables runtimes, 
including `current` and `preview`. 

While we don't specifically drop support for older runtimes, changes in PySpark APIs or
APIs from dependent packages such as `numpy`, `pandas`, `pyarrow`, and `pyparsing` may cause issues with older
runtimes. 

By design, installing `dbldatagen` does not install releases of dependent packages in order 
to preserve the curated set of packages pre-installed in any Databricks runtime environment.

When building on local environments, the build process uses the `Pipfile` and requirements files to determine 
the package versions for releases and unit tests. 

## Project Support
Please note that all projects released under [`Databricks Labs`](https://www.databricks.com/learn/labs)
 are provided for your exploration only, and are not formally supported by Databricks with Service Level Agreements 
(SLAs).  They are provided AS-IS, and we do not make any guarantees of any kind.  Please do not submit a support ticket 
relating to any issues arising from the use of these projects.

Any issues discovered through the use of this project should be filed as issues on the GitHub Repo.  
They will be reviewed as time permits, but there are no formal SLAs for support.


## Feedback

Issues with the application?  Found a bug?  Have a great idea for an addition?
Feel free to file an [issue](https://github.com/databrickslabs/dbldatagen/issues/new).


            
