# fast_to_sql
[![Test and lint](https://github.com/jdglaser/fast-to-sql/actions/workflows/test-and-lint.yml/badge.svg)](https://github.com/jdglaser/fast-to-sql/actions/workflows/test-and-lint.yml)
![pypi](https://img.shields.io/pypi/v/fast-to-sql.svg)
![Python version](https://img.shields.io/pypi/pyversions/fast-to-sql)
![PyPI - Downloads](https://img.shields.io/pypi/dm/fast-to-sql)
![PyPI - License](https://img.shields.io/pypi/l/fast-to-sql)
## Introduction
`fast_to_sql` is an improved way to upload pandas DataFrames to Microsoft SQL Server.
`fast_to_sql` builds on pyodbc rather than SQLAlchemy, which makes it a much lighter-weight dependency for writing pandas DataFrames to SQL Server. It uses pyodbc's `executemany` method with `fast_executemany` set to `True`, resulting in far faster run times when inserting data.
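For context, the snippet below is a minimal sketch of the pyodbc feature the package relies on (the connection string, table, and columns are placeholders, and this is not the package's exact internals): setting `fast_executemany` on a cursor batches parameterized inserts into bulk calls instead of one round trip per row.
```python
import pyodbc

# Placeholder connection details
conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=localhost;Database=my_database;UID=my_user;PWD=my_pass;"
)
cursor = conn.cursor()
cursor.fast_executemany = True  # pyodbc batches the parameters below into bulk inserts
cursor.executemany(
    "INSERT INTO my_table (col1, col2) VALUES (?, ?)",
    [(1, "A"), (2, "B"), (3, "C")],
)
conn.commit()
conn.close()
```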
## Installation
```bash
pip install fast_to_sql
```
## Requirements
* Written for Python 3.8+
* Requires pandas, pyodbc
## Example
```py
from datetime import datetime
import pandas as pd
import pyodbc
from fast_to_sql import fast_to_sql
# Test Dataframe for insertion
df = pd.DataFrame({
    "Col1": [1, 2, 3],
    "Col2": ["A", "B", "C"],
    "Col3": [True, False, True],
    "Col4": [datetime(2020, 1, 1), datetime(2020, 1, 2), datetime(2020, 1, 3)],
})

# Create a pyodbc connection
conn = pyodbc.connect(
    """
    Driver={ODBC Driver 17 for SQL Server};
    Server=localhost;
    Database=my_database;
    UID=my_user;
    PWD=my_pass;
    """
)

# If a table is created, the generated SQL is returned
create_statement = fast_to_sql(
    df, "my_great_table", conn, if_exists="replace", custom={"Col1": "INT PRIMARY KEY"}
)
# Commit upload actions and close connection
conn.commit()
conn.close()
```
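As a quick sanity check, you can read the table back before calling `conn.close()`. This is a sketch rather than part of the library's API; pandas may warn about using a raw DBAPI connection instead of SQLAlchemy, but a pyodbc connection works for a simple read.
```python
# Read the uploaded rows back before closing the connection (sketch only)
check = pd.read_sql("SELECT * FROM my_great_table", conn)
print(check)
```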
## Usage
### Main function
```python
fast_to_sql(
    df,
    name,
    conn,
    if_exists="append",
    custom=None,
    temp=False,
    copy=False,
    clean_cols=True
)
```
* `df`: pandas DataFrame to upload
* `name`: Desired name of the table in SQL Server (string)
* `conn`: A valid pyodbc connection object
* `if_exists`: What to do if the specified table name already exists in the database. If the table does not exist, a new one will be created. Defaults to `'append'`.
  * __'append'__: Appends the DataFrame to the table if it already exists in SQL Server.
  * __'fail'__: Raises a `FailError` if the table already exists in SQL Server.
  * __'replace'__: Drops the old table with the specified name and creates a new one. **Be careful with this option**: it will completely delete any existing table with the specified name in SQL Server.
* `custom`: A dictionary mapping one or more of the uploaded column names to a valid SQL column definition. Each value must contain a type (`INT`, `FLOAT`, `VARCHAR(500)`, etc.) and can optionally also include constraints (`NOT NULL`, `PRIMARY KEY`, etc.)
  * Examples:
    * `{'ColumnName': 'varchar(1000)'}`
    * `{'ColumnName2': 'int primary key'}`
* `temp`: Set to `True` to create a local SQL Server temporary table scoped to the connection; defaults to `False`.
* `copy`: Defaults to `False`. If set to `True`, a copy of the DataFrame is made so that the column names of the original DataFrame are not altered. Use this if you plan to keep using the DataFrame in your script after running `fast_to_sql`.
* `clean_cols`: Defaults to `True`. If set to `False`, column names will not be cleaned when creating the table to insert the DataFrame into; in that case it is up to the caller to make sure the column names are compatible with SQL Server. A combined example of these options follows this list.
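The snippet below is a sketch showing several of these options together. It reuses `df` and `conn` from the earlier example; the table name and column definition are placeholders, not part of the library's API.
```python
# Sketch: upload to a session-local temporary table with a custom column type.
# Assumes `df` and `conn` from the example above are still in scope.
fast_to_sql(
    df,
    "staging_table",
    conn,
    if_exists="fail",                          # raise if the table already exists
    custom={"Col2": "VARCHAR(100) NOT NULL"},  # override the inferred type for Col2
    temp=True,                                 # create as a temporary table for this connection
    copy=True,                                 # leave the original DataFrame's column names untouched
    clean_cols=False,                          # caller guarantees SQL Server-safe column names
)
conn.commit()
```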