df-to-azure

Name: df-to-azure
Version: 0.9.1
Home page: https://github.com/zypp-io/df_to_azure
Summary: Automatically write pandas DataFrames to SQL by creating pipelines in Azure Data Factory with copy activity from blob to SQL
Upload time: 2024-02-15 09:53:49
Author: Zypp
Requires Python: >=3.7
Keywords: python, azure, data-warehouse, sql, blob
<p align="center">
  <img alt="logo" src="https://www.zypp.io/static/assets/img/logos/zypp/white/500px.png"  width="200"/>
</p><br>

[![Downloads](https://pepy.tech/badge/df_to_azure)](https://pepy.tech/project/df-to-azure)
[![Open Source](https://badges.frapsoft.com/os/v1/open-source.svg?v=103)](https://opensource.org/)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
[![PyPI](https://img.shields.io/pypi/v/df_to_azure)](https://pypi.org/project/df-to-azure/)
[![Latest release](https://badgen.net/github/release/zypp-io/df_to_azure)](https://github.com/zypp-io/df_to_azure/releases)

DF to Azure
===

> Python module for fast upload of pandas DataFrames to Azure SQL Database using automatically created pipelines in Azure Data Factory.

## Introduction

The purpose of this project is to upload large datasets using Azure Data Factory combined with an Azure SQL database.
The following steps are carried out:

1. The data is uploaded as a .csv file to Azure Blob Storage.
2. A SQL table is prepared based on the [pandas DataFrame dtypes](https://pandas.pydata.org/pandas-docs/stable/user_guide/basics.html#basics-dtypes), which are converted to the corresponding [SQLAlchemy types](https://docs.sqlalchemy.org/en/14/core/type_basics.html) (see the example below).
3. A pipeline is created in Data Factory for copying the .csv from Blob Storage into the SQL table.
4. The pipeline is triggered, so that the .csv file is bulk inserted into the SQL table.
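
To make step 2 concrete, here is an illustrative DataFrame (the column names are hypothetical); its int64, float64, object and datetime64 dtypes are the kind of pandas types that end up as corresponding SQLAlchemy / SQL column types:

```python
import pandas as pd

# Illustrative DataFrame: each column dtype is translated to a SQL column type.
df = pd.DataFrame(
    {
        "id": [1, 2, 3],                # int64
        "amount": [9.99, 12.50, 3.75],  # float64
        "name": ["a", "b", "c"],        # object (string)
        "created_at": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-03"]),  # datetime64[ns]
    }
)
print(df.dtypes)
```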

## How it works

With the following arguments you can bulk insert your DataFrame into the SQL database:

```python
from df_to_azure import df_to_azure

df_to_azure(df=df, tablename="table_name", schema="schema", method="create")
```

1. `df`: the DataFrame you wish to export
2. `tablename`: desired name of the table
3. `schema`: desired SQL schema
4. `method`: one of "create", "append" or "upsert"
5. `id_field`: id field of the table, required if `method` is set to "upsert"

**Important**: the .csv files are uploaded to a container called `dftoazure`, so create this container in your storage account before using this module (a sketch of doing so programmatically follows below).
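
The container can also be created programmatically; below is a minimal sketch using the `azure-storage-blob` package (an assumption, not a dependency of this module; `<account_name>` and `<account_key>` are placeholders for your own storage account settings):

```python
from azure.storage.blob import BlobServiceClient

# One-off setup: create the "dftoazure" container that df_to_azure writes its .csv files to.
service = BlobServiceClient(
    account_url="https://<account_name>.blob.core.windows.net",
    credential="<account_key>",
)
service.create_container("dftoazure")
```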

### Upsert / create or append
With `method="upsert"` you can update the SQL table with new or changed records from the DataFrame you upload.
Based on `id_field`, the SQL table is checked for overlapping values: records that already exist are updated in the SQL table,
while new records are appended to it, as shown in the sketch below.
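
A minimal sketch of an upsert call, assuming the DataFrame has an `id` column that uniquely identifies each record:

```python
from df_to_azure import df_to_azure

# Upsert: rows whose "id" already exists in the SQL table are updated,
# all other rows are appended.
df_to_azure(df=df, tablename="table_name", schema="schema", method="upsert", id_field="id")
```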

## Settings
To use this module, you need to add the Azure subscription settings and Azure Data Factory settings to your environment variables.
We recommend working with `.env` files (or, even better, loading them automatically with [Azure Keyvault](https://pypi.org/project/keyvault/)) and loading them at runtime, but this is optional: the variables can also be set as system variables.
Use the following template for your `.env` file:

```text
# --- ADF SETTINGS ---

# data factory settings
rg_name : ""
rg_location: "westeurope"
df_name : ""

# blob settings
ls_blob_account_name : ""
ls_blob_container_name : ""
ls_blob_account_key : ""

# SQL settings
SQL_SERVER: ""
SQL_DB: ""
SQL_USER: ""
SQL_PW: ""

# --- AZURE SETTINGS ---
# azure credentials for connecting to azure subscription.
client_id : ""
secret : ""
tenant : ""
subscription_id : ""
```

## Parquet
Since version 0.6.0, uploading a DataFrame to parquet is supported. Simply add the argument `parquet=True` to upload the DataFrame to the Azure storage container `parquet`.
The arguments `tablename` and `schema` are used to create the folder structure. If `parquet` is set to `True`, the dataset will not be uploaded to a SQL database.
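
A minimal sketch of a parquet upload; it assumes the `parquet` container exists in the storage account and that the other arguments follow the SQL example above:

```python
from df_to_azure import df_to_azure

# With parquet=True the DataFrame is written to the "parquet" container
# (folder structure derived from schema and tablename); no SQL upload takes place.
df_to_azure(df=df, tablename="table_name", schema="schema", method="create", parquet=True)
```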

## Maintained by [Zypp](https://github.com/zypp-io):
- [Melvin Folkers](https://github.com/melvinfolkers)
- [Erfan Nariman](https://github.com/erfannariman)

## Support:
For support on using this module, you can reach us at [hello@zypp.io](mailto:hello@zypp.io)

---

## Testing

To run the test suite, use:

```commandline
pytest df_to_azure
```

To run pytest for a single test:
```commandline
pytest df_to_azure/tests/test_df_to_azure.py::test_duplicate_keys_upsert
```

            
