pyspark-data-mocker

Name: pyspark-data-mocker
Version: 2.0.1
Home page: https://fedemgp.github.io
Summary: Mock a datalake easily to be able to test your pyspark data application
Upload time: 2023-10-20 14:31:32
Author: Federico Gomez
Requires Python: >=3.8,<4.0
License: GPL-3.0
Keywords: pyspark, tests, data, mocker
            <!--
# To improve the naming of the datalake and avoid refactoring the project, move the basic datalake temporarily
$ mv tests/data/basic_datalake/bar tests/data/basic_datalake/school
$ mv tests/data/basic_datalake/foo tests/data/basic_datalake/grades
-->
# pyspark-data-mocker
`pyspark-data-mocker` is a testing tool that eases the burden of setting up a desired datalake, so you can easily
test the behavior of your data application. It also configures the Spark session to optimize it for testing
purposes.

## Install
```
pip install pyspark-data-mocker
```

## Usage
`pyspark-data-mocker` scans the directory you provide, looking for files that can be interpreted as tables, and
loads them into the datalake. The datalake will contain one database per folder found inside the root directory.
For example, let's take a look at the `basic_datalake`:

```bash
$ tree tests/data/basic_datalake -n --charset=ascii  # byexample: +rm=~ +skip
tests/data/basic_datalake
|-- grades
|   `-- exams.csv
`-- school
    |-- courses.csv
    `-- students.csv
~
2 directories, 3 files
```

This file hierarchy is preserved when the datalake is loaded: each sub-folder becomes a Spark database, and each
file is loaded as a table named after the file.

How can we load them using `pyspark-data-mocker`? Really simple!

```python
>>> from pyspark_data_mocker import DataLakeBuilder
>>> builder = DataLakeBuilder.load_from_dir("./tests/data/basic_datalake")  # byexample: +timeout=20 +pass
```

And that's it! You now have, in that execution context, a datalake with the structure defined in the folder
`basic_datalake`. Let's take a closer look by running some queries.

```python
>>> from pyspark.sql import SparkSession
>>> spark = SparkSession.builder.getOrCreate()
>>> spark.sql("SHOW DATABASES").show()
+---------+
|namespace|
+---------+
|  default|
|   grades|
|   school|
+---------+
```

We have the `default` database (which comes for free when instantiating Spark), plus one database for each folder
inside `tests/data/basic_datalake`: `school` and `grades`.


```python
>>> spark.sql("SHOW TABLES IN school").show()
+---------+---------+-----------+
|namespace|tableName|isTemporary|
+---------+---------+-----------+
|   school|  courses|      false|
|   school| students|      false|
+---------+---------+-----------+

>>> spark.sql("SELECT * FROM school.courses").show()
+---+------------+
| id| course_name|
+---+------------+
|  1|Algorithms 1|
|  2|Algorithms 2|
|  3|  Calculus 1|
+---+------------+


>>> spark.table("school.students").show()
+---+----------+---------+--------------------+------+----------+
| id|first_name|last_name|               email|gender|birth_date|
+---+----------+---------+--------------------+------+----------+
|  1|  Shirleen|  Dunford|sdunford0@amazona...|Female|1978-08-01|
|  2|      Niko|  Puckrin|npuckrin1@shinyst...|  Male|2000-11-28|
|  3|    Sergei|   Barukh|sbarukh2@bizjourn...|  Male|1992-01-20|
|  4|       Sal|  Maidens|smaidens3@senate.gov|  Male|2003-12-14|
|  5|    Cooper|MacGuffie| cmacguffie4@ibm.com|  Male|2000-03-07|
+---+----------+---------+--------------------+------+----------+

```

Note how each table is already filled with the data from its CSV file! The tool supports several kinds of files: `csv`, `parquet`
and `json`. The application infers which format to use by looking at the file extension.

```python
>>> spark.sql("SHOW TABLES IN grades").show()
+---------+---------+-----------+
|namespace|tableName|isTemporary|
+---------+---------+-----------+
|   grades|    exams|      false|
+---------+---------+-----------+

>>> spark.table("grades.exams").show()
+---+----------+---------+----------+----+
| id|student_id|course_id|      date|note|
+---+----------+---------+----------+----+
|  1|         1|        1|2022-05-01|   9|
|  2|         2|        1|2022-05-08|   7|
|  3|         3|        1|2022-06-17|   4|
|  4|         1|        3|2023-05-12|   9|
|  5|         2|        3|2023-05-12|  10|
|  6|         3|        3|2022-12-07|   7|
|  7|         4|        3|2022-12-07|   4|
|  8|         5|        3|2022-12-07|   2|
|  9|         1|        2|2023-05-01|   5|
| 10|         2|        2|2023-05-07|   8|
+---+----------+---------+----------+----+

```
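
As mentioned above, the loader is not limited to CSV files. The snippet below is only a sketch: the `my_datalake` layout and the `sales` tables are hypothetical and not part of this project, but they illustrate how the file extension drives which reader is used.

```python
# Hypothetical layout (not shipped with this project):
#
#   my_datalake/
#   `-- sales/
#       |-- orders.parquet
#       `-- customers.json
#
# Each file is picked up by the reader matching its extension and loaded
# as a table inside the `sales` database.
from pyspark.sql import SparkSession
from pyspark_data_mocker import DataLakeBuilder

builder = DataLakeBuilder.load_from_dir("./my_datalake")
spark = SparkSession.builder.getOrCreate()
spark.table("sales.orders").show()
spark.table("sales.customers").show()
```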

### Cleanup
You can easily clean up the datalake by using the `cleanup` method:

```python
>>> builder.cleanup()
>>> spark.sql("SHOW DATABASES").show()
+---------+
|namespace|
+---------+
|  default|
+---------+
```
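
Putting it all together, a common pattern is to build the datalake in a test fixture and tear it down with `cleanup` when the test finishes. The following is a minimal sketch assuming pytest; the test name and assertion are illustrative only, based on the `basic_datalake` shown above.

```python
import pytest
from pyspark.sql import SparkSession
from pyspark_data_mocker import DataLakeBuilder


@pytest.fixture()
def datalake():
    # Build the mocked datalake before each test and clean it up afterwards.
    builder = DataLakeBuilder.load_from_dir("./tests/data/basic_datalake")
    yield builder
    builder.cleanup()


def test_students_are_loaded(datalake):
    spark = SparkSession.builder.getOrCreate()
    # The fixture registered the `school` database with its `students` table.
    assert spark.table("school.students").count() == 5
```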

## Documentation
You can check the full documentation, covering all the features available in `pyspark-data-mocker`, [here](https://fedemgp.github.io/).

<!--
# Restore the previous state
$ mv tests/data/basic_datalake/school tests/data/basic_datalake/bar
$ mv tests/data/basic_datalake/grades tests/data/basic_datalake/foo
-->

            
