<p align="center">
<img alt="logo" src="https://www.zypp.io/static/assets/img/logos/zypp/white/500px.png" width="200"/>
</p><br>
[![Downloads](https://pepy.tech/badge/df_to_azure)](https://pepy.tech/project/df-to-azure)
[![Open Source](https://badges.frapsoft.com/os/v1/open-source.svg?v=103)](https://opensource.org/)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
[![PyPI](https://img.shields.io/pypi/v/df_to_azure)](https://pypi.org/project/df-to-azure/)
[![Latest release](https://badgen.net/github/release/zypp-io/df_to_azure)](https://github.com/zypp-io/df_to_azure/releases)
DF to Azure
===
> Python module for fast upload of pandas DataFrames to Azure SQL Database using automatically created pipelines in Azure Data Factory.
## Introduction
The purpose of this project is to upload large datasets using Azure Data Factory combined with an Azure SQL Database.
The process runs in the following steps:
1. The data is uploaded as a .csv file to Azure Blob Storage.
2. A SQL table is prepared based on the [pandas DataFrame types](https://pandas.pydata.org/pandas-docs/stable/user_guide/basics.html#basics-dtypes),
which are converted to the corresponding [SQLAlchemy types](https://docs.sqlalchemy.org/en/14/core/type_basics.html).
3. A pipeline is created in Data Factory for copying the .csv from Blob Storage into the SQL table.
4. The pipeline is triggered, so that the .csv file is bulk inserted into the SQL table.
## How it works
Based on the following arguments, it is possible to bulk insert your DataFrame into the SQL Database:
```python
from df_to_azure import df_to_azure
df_to_azure(df=df, tablename="table_name", schema="schema", method="create")
```
1. `df`: the DataFrame you wish to export
2. `tablename`: desired name of the table
3. `schema`: desired SQL schema
4. `method`: one of "create", "append" or "upsert"
5. `id_field`: id field of the table, required when `method` is set to "upsert"
**Important**: the .csv files are uploaded to a container named `dftoazure`, so create this container in your storage account before using this module.
### Upsert / create or append
It is possible to upsert the SQL table with new or changed records present in the dataframe you upload.
Based on `id_field`, the SQL table is checked for overlapping values:
records whose id already exists in the SQL table are updated,
and records with new ids are appended to the table.
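Conceptually, the upsert can be sketched locally with pandas (illustrative names and data only; the actual merge is executed in SQL by the pipeline):

```python
import pandas as pd

# current contents of the SQL table (illustrative)
table = pd.DataFrame({"id": [1, 2, 3], "value": ["a", "b", "c"]})
# dataframe passed to df_to_azure(..., method="upsert", id_field="id")
df = pd.DataFrame({"id": [2, 4], "value": ["b2", "d"]})

# rows whose id overlaps with the upload are replaced by the uploaded rows;
# rows with new ids are appended
kept = table[~table["id"].isin(df["id"])]
result = pd.concat([kept, df]).sort_values("id").reset_index(drop=True)
```

Here id 2 is updated to `"b2"` and id 4 is appended, while ids 1 and 3 are left untouched.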
## Settings
To use this module, you need to add the Azure subscription settings and Azure Data Factory settings to your environment variables.
We recommend working with `.env` files (or even better, loading them automatically with [Azure Keyvault](https://pypi.org/project/keyvault/)) and reading them in at runtime, but this is optional and they can be set as system variables as well.
Use the template below when working with a `.env` file.
## Parquet
Since version 0.6.0, uploading the dataframe to parquet is supported. Simply add the argument `parquet=True` to upload the dataframe to the `parquet` container in your Azure storage account.
The arguments `tablename` and `schema` are used to create the folder structure. If `parquet` is set to True, the dataset will not be uploaded to a SQL database.
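A parquet upload might look like this (a sketch; `df` is any pandas DataFrame, and the table and schema names are placeholders):

```python
import pandas as pd
from df_to_azure import df_to_azure

df = pd.DataFrame({"id": [1, 2], "value": ["a", "b"]})

# with parquet=True the dataframe is written as parquet to the storage
# account's parquet container, organized by schema and tablename;
# no SQL table is created or filled
df_to_azure(df=df, tablename="table_name", schema="schema", parquet=True)
```

Note that this call still requires the storage settings below to be present in your environment.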
```text
# --- ADF SETTINGS ---
# data factory settings
rg_name: ""
rg_location: "westeurope"
df_name: ""
# blob settings
ls_blob_account_name: ""
ls_blob_container_name: ""
ls_blob_account_key: ""
# SQL settings
SQL_SERVER: ""
SQL_DB: ""
SQL_USER: ""
SQL_PW: ""
# --- AZURE SETTINGS ---
# azure credentials for connecting to azure subscription.
client_id: ""
secret: ""
tenant: ""
subscription_id: ""
```
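At runtime, a file in this template format can be read into environment variables with a small loader like the following (a minimal stdlib sketch for the `key: "value"` lines above; a library such as python-dotenv, which expects `KEY=value` lines, is the more common choice):

```python
import os

def load_env(path: str = ".env") -> None:
    """Load `key: "value"` lines from the template above into os.environ."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            # skip blank lines and comments
            if not line or line.startswith("#"):
                continue
            key, sep, value = line.partition(":")
            if not sep:
                continue
            os.environ[key.strip()] = value.strip().strip('"')
```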
## Maintained by [Zypp](https://github.com/zypp-io):
- [Melvin Folkers](https://github.com/melvinfolkers)
- [Erfan Nariman](https://github.com/erfannariman)
## Support:
For support on using this module, you can reach us at [hello@zypp.io](mailto:hello@zypp.io)
---
## Testing
To run the test suite, use:
```commandline
pytest df_to_azure
```
To run pytest for a single test:
```commandline
pytest df_to_azure/tests/test_df_to_azure.py::test_duplicate_keys_upsert
```