pysparkify

Name: pysparkify
Version: 0.27.0
Home page: https://github.com/raohammad/pysparkify
Summary: Spark based ETL
Author: Hammad Aslam KHAN
License: MIT
Keywords: python, pysparkify, etl, bigdata
Upload time: 2024-05-14 14:22:41
# Introduction

The pysparkify library facilitates data processing from diverse sources, applying transformations, and writing outcomes to various destinations. It employs the pipeline design pattern, offering a flexible and modular approach to big data processing.

Supported sources and sinks include:

- Amazon S3
- Amazon Redshift
- Postgres DB
- Local files (for local Spark runs)

## Setup

Install this package using:

```bash
pip install pysparkify
```

Create a `spark_config.conf` file in the following format to supply all Spark-related configuration. For instance:

```ini
[SPARK]
spark.master=local[*]
spark.app.name=PysparkifyApp
spark.executor.memory=4g
spark.driver.memory=2g
```
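As a sketch of how such a file could be consumed (the actual pysparkify loader may differ), Python's stdlib `configparser` reads this INI-style format directly:

```python
import configparser

# Parse an INI-style Spark config like the one above (illustrative only;
# pysparkify's actual loader may behave differently).
conf_text = """\
[SPARK]
spark.master=local[*]
spark.app.name=PysparkifyApp
spark.executor.memory=4g
"""

cp = configparser.ConfigParser()
cp.read_string(conf_text)
options = dict(cp["SPARK"])

# Each key/value pair would then be applied to the SparkSession builder,
# roughly `builder = builder.config(key, value)` for each pair (requires
# pyspark, omitted here).
print(options["spark.master"])    # local[*]
print(options["spark.app.name"])  # PysparkifyApp
```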

The library abstracts Spark data processing workflows. For example, you can:

- Extract the first two rows of data and save them as a separate output.
- Compute an average and save it as another output.

Here's a sample data set:

```csv
name,age,city
Hayaan,10,Islamabad
Jibraan,8,ShahAlam
Allyan,3,Paris
John,35,San Francisco
Doe,22,Houston
Dane,30,Seattle
```

Your recipe reads the CSV data as the source, transforms it, and optionally saves the output of each transformation to a sink. Below is a sample recipe.yml for this operation:

```yaml
source:
  - type: CsvSource
    config:
      name: csv
      path: "resources/data/input_data.csv"

transformer:
  - type: SQLTransformer
    config:
      name: transformer1
      source: 
        - name: csv
          as_name: t1
      statement: 
        - sql: "SELECT * from t1 limit 2"
          as_name: trx1
          to_sink: sink1
        - sql: "select AVG(age) from trx1"
          as_name: trx2
          to_sink: sink2

sink:
  - type: CsvSink
    config:
      name: sink1
      path: "output/output_data.csv"
  - type: CsvSink
    config:
      name: sink2
      path: "output/avgage_data.csv"
      
```
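The two statements chain: each `sql` entry runs against the views registered so far, and its result is registered under its `as_name` for later statements. These semantics can be sketched without Spark at all, using stdlib sqlite3 as a stand-in SQL engine (illustrative only; the real pipeline runs Spark SQL):

```python
import sqlite3

# The sample data from above, minus the city column for brevity.
rows = [("Hayaan", 10), ("Jibraan", 8), ("Allyan", 3),
        ("John", 35), ("Doe", 22), ("Dane", 30)]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (name TEXT, age INTEGER)")
con.executemany("INSERT INTO t1 VALUES (?, ?)", rows)

# Statement 1: first two rows, registered as trx1 (routed to sink1).
con.execute("CREATE TABLE trx1 AS SELECT * FROM t1 LIMIT 2")

# Statement 2: average age over trx1 (routed to sink2).
avg_age = con.execute("SELECT AVG(age) FROM trx1").fetchone()[0]
print(avg_age)  # 9.0 (average of 10 and 8)
```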


## Usage

You can run this library as a command-line tool:

```bash
pysparkify 'path/to/recipe.yml' --spark-config 'path/to/spark-config.conf'
```

Or use it in your Python scripts:

```python
import pysparkify

# expects a Spark config file at the default path `config/spark_config.conf`
pysparkify.run('path/to/recipe.yml')

# or pass a custom Spark configuration file
pysparkify.run('path/to/recipe.yml', 'path/to/custom_spark_config.conf')
```

## Design

The package is structured as follows:

### Source, Sink and Transformer Abstraction

The package defines abstract classes `Source`, `Sink` and `Transformer` to represent data sources, sinks and transformers. It also provides concrete classes, including `CsvSource`, `CsvSink` and `SQLTransformer`, which inherit from the abstract classes. This design allows you to add new source and sink types with ease.
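A minimal sketch of what this abstraction can look like (hypothetical names and signatures; the real pysparkify classes may differ):

```python
from abc import ABC, abstractmethod

class Source(ABC):
    """Anything that can produce a DataFrame-like object."""
    def __init__(self, config):
        self.config = config

    @abstractmethod
    def read(self):
        ...

class Sink(ABC):
    """Anything that can persist a DataFrame-like object."""
    def __init__(self, config):
        self.config = config

    @abstractmethod
    def write(self, df):
        ...

# Toy in-memory implementations showing how a new source/sink pair slots
# in; a real CsvSource/CsvSink would wrap spark.read.csv / df.write.csv.
class ListSource(Source):
    def read(self):
        return list(self.config["rows"])

class ListSink(Sink):
    def __init__(self, config):
        super().__init__(config)
        self.rows = []

    def write(self, df):
        self.rows.extend(df)
```

Registering such a class under a `type:` name in `recipe.yml` would then make it available to pipelines.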

### Configuration via `recipe.yml`

The package reads its configuration from a `recipe.yml` file. This YAML file specifies the source, sink, and transformation configurations. It allows you to define different data sources, sinks, and transformation queries.

### Transformation Queries

Transformations are performed by `SQLTransformer` using Spark SQL queries defined in the configuration. These queries are executed on the data from the source before the results are written to the sink. New transformers can be implemented by extending the `Transformer` abstract class; a transformer takes Spark DataFrames from sources, processes them, and passes the resulting DataFrames to sinks to be saved.
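For instance, a custom transformer could look like the following (a hypothetical sketch, with plain Python lists of dicts standing in for Spark DataFrames; the real base-class signature may differ):

```python
from abc import ABC, abstractmethod

# Minimal stand-in for pysparkify's Transformer base class (illustrative).
class Transformer(ABC):
    def __init__(self, config):
        self.config = config

    @abstractmethod
    def transform(self, dfs):
        """Map named input frames to named output frames."""

# Hypothetical transformer keeping only rows above an age threshold.
# With real pysparkify, `df` would be a Spark DataFrame and the body
# would use df.filter(...) instead of a list comprehension.
class AdultFilterTransformer(Transformer):
    def transform(self, dfs):
        threshold = self.config.get("min_age", 18)
        df = dfs[self.config["source"]]
        kept = [row for row in df if row["age"] >= threshold]
        return {self.config["as_name"]: kept}
```

Used directly, `AdultFilterTransformer({"source": "csv", "as_name": "adults"})` would map the `csv` frame to a filtered `adults` frame for downstream sinks.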

### Pipeline Execution

The package reads data from the specified sources, performs transformations based on the configured SQL queries, and then writes the results to the specified sinks. You can configure multiple sources and sinks within the same recipe.
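The execution order implied by the recipe can be sketched as a simple driver loop (hypothetical; the real pysparkify runner is more involved): read every source, run every transformer in order, and route each named output to its named sink.

```python
# frames maps names (csv, trx1, ...) to DataFrame-like objects.
def run_pipeline(sources, transformers, sinks):
    frames = {name: read() for name, read in sources.items()}
    for step in transformers:
        frames[step["as_name"]] = step["fn"](frames)
        if "to_sink" in step:
            sinks[step["to_sink"]](frames[step["as_name"]])
    return frames

# Toy run mirroring the recipe above, with lists standing in for DataFrames.
captured = []
result = run_pipeline(
    sources={"csv": lambda: [1, 2, 3]},
    transformers=[{"fn": lambda f: f["csv"][:2],  # "limit 2"
                   "as_name": "trx1", "to_sink": "sink1"}],
    sinks={"sink1": captured.extend},
)
print(captured)        # [1, 2]
print(result["trx1"])  # [1, 2]
```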


## How to Contribute

1. There are plenty of ways to contribute; implementing new Sources and Sinks tops the list.
2. Open a PR.
3. Once the PR is reviewed and approved, the included GitHub Actions workflow deploys the new version directly to the PyPI repository.


## Sponsors

The project is sponsored by [Dataflick](https://www.dataflick.dev).
If you are considering using this library in your projects, [please consider becoming a sponsor for continued support](mailto:raohammad@gmail.com?subject=Become%20A%20Sponsor).

            
