# spark_dummy_tools
[![Github License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
spark_dummy_tools is a Python library for generating dummy Spark tables.
## Installation
The package is published on PyPI, so installation consists of running:
```sh
pip install spark-dummy-tools --user
```
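The usage example below also imports `spark_dataframe_tools` (used for the `df.show2` helper at the end). Assuming it follows the same PyPI naming convention, it can be installed the same way:
```sh
pip install spark-dataframe-tools --user
```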
## Usage
The library exposes two wrappers for generating dummy tables: `generated_dummy_table_datum` (schema read from a local file) and `generated_dummy_table_artifactory` (schema resolved via Artifactory).
```python
from spark_dummy_tools import generated_dummy_table_artifactory
from spark_dummy_tools import generated_dummy_table_datum
import spark_dataframe_tools
# ------------------------------------------------------------
# Generated Dummy Table Datum
# ------------------------------------------------------------
path = "fields_pe_datum2.csv"
table_name = "t_kctk_collateralization_atrb"
storage_zone = "master"
sample_parquet = 10
columns_integer_default={}
columns_date_default={"gf_cutoff_date":"2026-01-01"}
columns_string_default={}
columns_decimal_default={"other_concepts_amount":"500.00"}
partition_colum=["gf_cutoff_date"]
generated_dummy_table_datum(spark=spark,
                            path=path,
                            table_name=table_name,
                            storage_zone=storage_zone,
                            sample_parquet=sample_parquet,
                            partition_colum=partition_colum,
                            columns_integer_default=columns_integer_default,
                            columns_date_default=columns_date_default,
                            columns_string_default=columns_string_default,
                            columns_decimal_default=columns_decimal_default)
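# The generated parquet presumably lands under DIRECTORY_DUMMY/<table_name>,
# which is where the read-back snippet at the end of this example looks.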
# ------------------------------------------------------------
# Generated Dummy Table Artifactory
# ------------------------------------------------------------
sample_parquet = 10
table_name = ""
env = "work"
phase = "master"
code_country = "pe"
is_uuaa_tag = False
is_sandbox = False
token_artifactory = ""
partition_colum = None
columns_integer_default = {}
columns_date_default = {"gf_cutoff_date": "2026-01-01"}
columns_string_default = {}
columns_decimal_default = {"other_concepts_amount": "500.00"}
generated_dummy_table_artifactory(spark=spark,
                                  table_name=table_name,
                                  env=env,
                                  phase=phase,
                                  code_country=code_country,
                                  is_uuaa_tag=is_uuaa_tag,
                                  is_sandbox=is_sandbox,
                                  token_artifactory=token_artifactory,
                                  partition_colum=partition_colum,
                                  sample_parquet=sample_parquet,
                                  columns_integer_default=columns_integer_default,
                                  columns_date_default=columns_date_default,
                                  columns_string_default=columns_string_default,
                                  columns_decimal_default=columns_decimal_default)
import os
import sys

# Spark expects forward slashes in paths, even on Windows
is_windows = sys.platform.startswith('win')
path_directory = os.path.join("DIRECTORY_DUMMY", table_name)
if is_windows:
    path_directory = path_directory.replace("\\", "/")

# Read the generated dummy table back; show2 is the pretty-print
# helper presumably provided by the spark_dataframe_tools import above
df = spark.read.parquet(path_directory)
df.show2(10)
```
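Beyond `show2`, the generated table can be sanity-checked with standard PySpark calls. A minimal sketch, assuming `df` was read back as above and the default-value dictionaries from the example were used:
```python
# Inspect the schema the dummy generator produced
df.printSchema()

# The row count should reflect sample_parquet
print(df.count())

# The forced defaults should appear as the only distinct values
df.select("gf_cutoff_date", "other_concepts_amount").distinct().show()
```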
## License
[Apache License 2.0](https://www.dropbox.com/s/8t6xtgk06o3ij61/LICENSE?dl=0).
## New features v1.0
## BugFix
- `choco install visualcpp-build-tools`
## Reference
- Jonathan Quiza [github](https://github.com/jonaqp).
- Jonathan Quiza [RumiMLSpark](http://rumi-ml.herokuapp.com/).
- Jonathan Quiza [linkedin](https://www.linkedin.com/in/jonaqp/).