# Snowflake SQLAlchemy
[![Build and Test](https://github.com/snowflakedb/snowflake-sqlalchemy/actions/workflows/build_test.yml/badge.svg)](https://github.com/snowflakedb/snowflake-sqlalchemy/actions/workflows/build_test.yml)
[![codecov](https://codecov.io/gh/snowflakedb/snowflake-sqlalchemy/branch/main/graph/badge.svg)](https://codecov.io/gh/snowflakedb/snowflake-sqlalchemy)
[![PyPi](https://img.shields.io/pypi/v/snowflake-sqlalchemy.svg)](https://pypi.python.org/pypi/snowflake-sqlalchemy/)
[![License Apache-2.0](https://img.shields.io/:license-Apache%202-brightgreen.svg)](http://www.apache.org/licenses/LICENSE-2.0.txt)
[![Codestyle Black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
Snowflake SQLAlchemy runs on top of the Snowflake Connector for Python as a [dialect](http://docs.sqlalchemy.org/en/latest/dialects/) to bridge a Snowflake database and SQLAlchemy applications.
| :exclamation: | For production-affecting or urgent issues related to the connector, please [create a case with Snowflake Support](https://community.snowflake.com/s/article/How-To-Submit-a-Support-Case-in-Snowflake-Lodge). |
|---------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
## Prerequisites
### Snowflake Connector for Python
The only requirement for Snowflake SQLAlchemy is the Snowflake Connector for Python; however, the connector does not need to be installed because installing Snowflake SQLAlchemy automatically installs the connector.
### Data Analytics and Web Application Frameworks (Optional)
Snowflake SQLAlchemy can be used with [Pandas](http://pandas.pydata.org/), [Jupyter](http://jupyter.org/) and [Pyramid](http://www.pylonsproject.org/), which provide higher-level frameworks for data analytics and web applications. However, building a working environment from scratch is not a trivial task, particularly for novice users. Installing the frameworks requires C compilers and tools, and choosing the right tools and versions is a hurdle that might deter users from using Python applications.
An easier way to build an environment is through [Anaconda](https://www.continuum.io/why-anaconda), which provides a complete, precompiled technology stack for all users, including non-Python experts such as data analysts and students. For Anaconda installation instructions, see the [Anaconda install documentation](https://docs.continuum.io/anaconda/install). The Snowflake SQLAlchemy package can then be installed on top of Anaconda using [pip](https://pypi.python.org/pypi/pip).
## Installing Snowflake SQLAlchemy
The Snowflake SQLAlchemy package can be installed from the public PyPI repository using `pip`:
```shell
pip install --upgrade snowflake-sqlalchemy
```
`pip` automatically installs all required modules, including the Snowflake Connector for Python.
## Verifying Your Installation
1. Create a file (e.g. `validate.py`) that contains the following Python sample code,
which connects to Snowflake and displays the Snowflake version:
```python
from sqlalchemy import create_engine, text

engine = create_engine(
    'snowflake://{user}:{password}@{account}/'.format(
        user='<your_user_login_name>',
        password='<your_password>',
        account='<your_account_name>',
    )
)
try:
    connection = engine.connect()
    results = connection.execute(text('select current_version()')).fetchone()
    print(results[0])
finally:
    connection.close()
    engine.dispose()
```
2. Replace `<your_user_login_name>`, `<your_password>`, and `<your_account_name>` with the appropriate values for your Snowflake account and user.
For more details, see [Connection Parameters](#connection-parameters).
3. Execute the sample code. For example, if you created a file named `validate.py`:
```shell
python validate.py
```
The Snowflake version (e.g. `1.48.0`) should be displayed.
## Parameters and Behavior
As much as possible, Snowflake SQLAlchemy provides compatible functionality for SQLAlchemy applications. For information on using SQLAlchemy, see the [SQLAlchemy documentation](http://docs.sqlalchemy.org/en/latest/).
However, Snowflake SQLAlchemy also provides Snowflake-specific parameters and behavior, which are described in the following sections.
### Connection Parameters
Snowflake SQLAlchemy uses the following syntax for the connection string used to connect to Snowflake and initiate a session:
```python
'snowflake://<user_login_name>:<password>@<account_name>'
```
Where:
- `<user_login_name>` is the login name for your Snowflake user.
- `<password>` is the password for your Snowflake user.
- `<account_name>` is the name of your Snowflake account.
Include the region in the `<account_name>` if applicable; more information is available [here](https://docs.snowflake.com/en/user-guide/connecting.html#your-snowflake-account-name).
You can optionally specify the initial database and schema for the Snowflake session by including them at the end of the connection string, separated by `/`. You can also specify the initial warehouse and role for the session as a parameter string at the end of the connection string:
```python
'snowflake://<user_login_name>:<password>@<account_name>/<database_name>/<schema_name>?warehouse=<warehouse_name>&role=<role_name>'
```
#### Escaping Special Characters such as `%, @` signs in Passwords
As pointed out in the [SQLAlchemy documentation](https://docs.sqlalchemy.org/en/14/core/engines.html#escaping-special-characters-such-as-signs-in-passwords), URLs
containing special characters, including the `%` and `@` signs, need to be URL encoded to be parsed correctly. An unescaped password containing special
characters can lead to authentication failures.
The encoding for the password can be generated using `urllib.parse`:
```python
import urllib.parse

urllib.parse.quote("kx@% jj5/g")  # returns 'kx%40%25%20jj5/g'
```
**Note**: `urllib.parse.quote_plus` may also be used if there is no space in the string, as `urllib.parse.quote_plus` will replace space with `+`.
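For instance (note that, unlike `quote`, `quote_plus` also encodes `/` by default):

```python
import urllib.parse

urllib.parse.quote_plus("kx@%jj5g")  # returns 'kx%40%25jj5g'
```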
To create an engine with a properly encoded password, either construct the URL string manually
or use the `snowflake.sqlalchemy.URL` helper method:
```python
import urllib.parse
from snowflake.sqlalchemy import URL
from sqlalchemy import create_engine
quoted_password = urllib.parse.quote("kx@% jj5/g")
# 1. Manually construct a URL string.
url = f'snowflake://testuser1:{quoted_password}@abc123/testdb/public?warehouse=testwh&role=myrole'
engine = create_engine(url)

# 2. Use the snowflake.sqlalchemy.URL helper method.
engine = create_engine(URL(
    account='abc123',
    user='testuser1',
    password=quoted_password,
    database='testdb',
    schema='public',
    warehouse='testwh',
    role='myrole',
))
```
**Note**:
After login, the initial database, schema, warehouse and role specified in the connection string can always be changed for the session.
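For example, a minimal sketch that switches the session's warehouse and role after connecting (assuming an existing `engine`; the object names are illustrative):

```python
from sqlalchemy import text

with engine.connect() as connection:
    # These changes apply to this session only; the engine's URL defaults
    # are unaffected.
    connection.execute(text("USE WAREHOUSE other_wh"))
    connection.execute(text("USE ROLE other_role"))
```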
The following example calls the `create_engine` method with the user name `testuser1`, password `0123456`, account name `abc123`, database `testdb`, schema `public`, warehouse `testwh`, and role `myrole`:
```python
from sqlalchemy import create_engine
engine = create_engine(
    'snowflake://testuser1:0123456@abc123/testdb/public?warehouse=testwh&role=myrole'
)
```
Other parameters, such as `timezone`, can also be specified as a URI parameter or in `connect_args` parameters. For example:
```python
from sqlalchemy import create_engine
engine = create_engine(
    'snowflake://testuser1:0123456@abc123/testdb/public?warehouse=testwh&role=myrole',
    connect_args={
        'timezone': 'America/Los_Angeles',
    }
)
```
For convenience, you can use the `snowflake.sqlalchemy.URL` method to construct the connection string and connect to the database. The following example constructs the same connection string from the previous example:
```python
from snowflake.sqlalchemy import URL
from sqlalchemy import create_engine
engine = create_engine(URL(
    account='abc123',
    user='testuser1',
    password='0123456',
    database='testdb',
    schema='public',
    warehouse='testwh',
    role='myrole',
    timezone='America/Los_Angeles',
))
```
#### Using a proxy server
Use the supported environment variables, `HTTPS_PROXY`, `HTTP_PROXY`, and `NO_PROXY`, to configure a proxy server.
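For example, a sketch that sets the proxy variables from Python before the engine is created (the host names and port below are placeholders):

```python
import os

# Placeholders: replace with your proxy endpoints and bypass list.
os.environ["HTTPS_PROXY"] = "http://proxyserver.example.com:8080"
os.environ["HTTP_PROXY"] = "http://proxyserver.example.com:8080"
os.environ["NO_PROXY"] = "localhost,127.0.0.1"
```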
### Opening and Closing Connection
Open a connection by executing `engine.connect()`; avoid using `engine.execute()`. Make certain to close the connection by executing `connection.close()` before
`engine.dispose()`; otherwise, the Python garbage collector removes the resources required to communicate with Snowflake, preventing the Python connector from closing the session properly.
```python
# Avoid this.
engine = create_engine(...)
engine.execute(<SQL>)
engine.dispose()

# Better.
engine = create_engine(...)
connection = engine.connect()
try:
    connection.execute(text(<SQL>))
finally:
    connection.close()
    engine.dispose()

# Best.
try:
    with engine.connect() as connection:
        connection.execute(text(<SQL>))
        # or
        connection.exec_driver_sql(<SQL>)
finally:
    engine.dispose()
```
### Auto-increment Behavior
Auto-incrementing a value requires the `Sequence` object. Include the `Sequence` object in the primary key column to automatically increment the value as each new record is inserted. For example:
```python
t = Table('mytable', metadata,
    Column('id', Integer, Sequence('id_seq'), primary_key=True),
    Column(...), ...
)
```
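As a fuller sketch (assuming an existing `engine`; the `users` table and `user_id_seq` names are illustrative):

```python
from sqlalchemy import Table, Column, Integer, String, MetaData, Sequence

metadata = MetaData()
users = Table(
    'users', metadata,
    Column('id', Integer, Sequence('user_id_seq'), primary_key=True),
    Column('name', String),
)
metadata.create_all(engine)  # creates the sequence and the table

with engine.begin() as connection:
    # 'id' is omitted from the insert; each row draws the next value
    # from user_id_seq.
    connection.execute(users.insert(), [{'name': 'alice'}, {'name': 'bob'}])
```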
### Object Name Case Handling
Snowflake stores all case-insensitive object names in uppercase text. In contrast, SQLAlchemy considers all lowercase object names to be case-insensitive. Snowflake SQLAlchemy converts the object name case during schema-level communication, i.e. during table and index reflection. If you use uppercase object names, SQLAlchemy assumes they are case-sensitive and encloses the names in quotes. This behavior causes mismatches against the data dictionary data received from Snowflake, so unless identifier names have truly been created as case-sensitive using quotes (e.g., `"TestDb"`), all lowercase names should be used on the SQLAlchemy side.
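A minimal sketch of the difference (table names are illustrative):

```python
from sqlalchemy import Table, Column, Integer, MetaData

metadata = MetaData()

# Lowercase name: treated as case-insensitive; Snowflake stores it as MYTABLE.
Table('mytable', metadata, Column('id', Integer, primary_key=True))

# Mixed-case name: assumed case-sensitive; the emitted DDL quotes it,
# e.g. CREATE TABLE "TestDb" (...).
Table('TestDb', metadata, Column('id', Integer, primary_key=True))
```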
### Index Support
Indexes are supported only for Hybrid Tables in Snowflake SQLAlchemy. For more details on limitations and use cases, refer to the [Create Index documentation](https://docs.snowflake.com/en/sql-reference/constraints-indexes.html). You can create an index using the following methods:
#### Single Column Index
You can create a single column index by setting the `index=True` parameter on the column or by explicitly defining an `Index` object.
```python
hybrid_test_table_1 = HybridTable(
    "table_name",
    metadata,
    Column("column1", Integer, primary_key=True),
    Column("column2", String, index=True),
    Index("index_1", "column1", "column2")
)

metadata.create_all(engine_testaccount)
```
#### Multi-Column Index
For multi-column indexes, you define the `Index` object specifying the columns that should be indexed.
```python
hybrid_test_table_1 = HybridTable(
    "table_name",
    metadata,
    Column("column1", Integer, primary_key=True),
    Column("column2", String),
    Index("index_1", "column1", "column2")
)

metadata.create_all(engine_testaccount)
```
### Numpy Data Type Support
Snowflake SQLAlchemy supports binding and fetching `NumPy` data types. Binding is always supported. To enable fetching `NumPy` data types, add `numpy=True` to the connection parameters.
The following example shows the round trip of `numpy.datetime64` data:
```python
import numpy as np
import pandas as pd
from snowflake.sqlalchemy import URL
from sqlalchemy import create_engine

engine = create_engine(URL(
    account='abc123',
    user='testuser1',
    password='pass',
    database='db',
    schema='public',
    warehouse='testwh',
    role='myrole',
    numpy=True,
))

specific_date = np.datetime64('2016-03-04T12:03:05.123456789Z')

with engine.connect() as connection:
    connection.exec_driver_sql(
        "CREATE OR REPLACE TABLE ts_tbl(c1 TIMESTAMP_NTZ)")
    connection.exec_driver_sql(
        "INSERT INTO ts_tbl(c1) values(%s)", (specific_date,)
    )
    df = pd.read_sql_query("SELECT * FROM ts_tbl", connection)
    assert df.c1.values[0] == specific_date
```
The following `NumPy` data types are supported:
- numpy.int64
- numpy.float64
- numpy.datetime64
### Cache Column Metadata
SQLAlchemy provides [the runtime inspection API](http://docs.sqlalchemy.org/en/latest/core/inspection.html) to get runtime information about various objects. A common use case is to get all tables and their column metadata in a schema in order to construct a schema catalog. For example, [alembic](http://alembic.zzzcomputing.com/) on top of SQLAlchemy manages database schema migrations. A pseudo-code flow is as follows:
```python
from sqlalchemy import inspect

inspector = inspect(engine)
schema = inspector.default_schema_name
for table_name in inspector.get_table_names(schema):
    column_metadata = inspector.get_columns(table_name, schema)
    primary_keys = inspector.get_pk_constraint(table_name, schema)
    foreign_keys = inspector.get_foreign_keys(table_name, schema)
    ...
```
A potential problem with this flow is that it may take quite a while, as queries run against each table. The results are cached, but getting column metadata is expensive.
To mitigate the problem, Snowflake SQLAlchemy used to take a flag `cache_column_metadata=True` so that the column metadata for all tables was cached when `get_table_names` was called, and the subsequent `get_columns`, `get_primary_keys` and `get_foreign_keys` calls could take advantage of the cache.
```python
engine = create_engine(URL(
    account='abc123',
    user='testuser1',
    password='pass',
    database='db',
    schema='public',
    warehouse='testwh',
    role='myrole',
    cache_column_metadata=True,
))
```
**Note**: This flag has been deprecated and removed, as caching now uses the built-in SQLAlchemy reflection cache. Caching has been improved, and where possible extra data is fetched and cached.
### VARIANT, ARRAY and OBJECT Support
Snowflake SQLAlchemy supports fetching `VARIANT`, `ARRAY` and `OBJECT` data types. All types are converted into `str` in Python so that you can convert them to native data types using `json.loads`.
This example shows how to create a table including `VARIANT`, `ARRAY`, and `OBJECT` data type columns.
```python
from snowflake.sqlalchemy import (VARIANT, ARRAY, OBJECT)
t = Table('my_semi_structured_datatype_table', metadata,
    Column('va', VARIANT),
    Column('ob', OBJECT),
    Column('ar', ARRAY))
metadata.create_all(engine)
```
In order to retrieve `VARIANT`, `ARRAY`, and `OBJECT` data type columns and convert them to the native Python data types, fetch data and call the `json.loads` method as follows:
```python
import json
from sqlalchemy import select

connection = engine.connect()
results = connection.execute(select(t))
row = results.fetchone()
data_variant = json.loads(row[0])
data_object = json.loads(row[1])
data_array = json.loads(row[2])
```
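Going the other way, binding a Python object into a `VARIANT` column is typically done through Snowflake's `PARSE_JSON` function in SQL; a hedged sketch, assuming the table created above exists:

```python
import json
from sqlalchemy import text

payload = {"name": "alice", "tags": ["a", "b"]}
with engine.begin() as connection:
    # PARSE_JSON converts the bound JSON string into a VARIANT value.
    connection.execute(
        text(
            "INSERT INTO my_semi_structured_datatype_table (va) "
            "SELECT PARSE_JSON(:v)"
        ),
        {"v": json.dumps(payload)},
    )
```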
### Structured Data Types Support
This module defines custom SQLAlchemy types for Snowflake structured data, specifically for **Iceberg tables**.
The types —**MAP**, **OBJECT**, and **ARRAY**— allow you to store complex data structures in your SQLAlchemy models.
For detailed information, refer to the Snowflake [Structured data types](https://docs.snowflake.com/en/sql-reference/data-types-structured) documentation.
---
#### MAP
The `MAP` type represents a collection of key-value pairs, where each key and value can have different types.
- **Key Type**: The type of the keys (e.g., `TEXT`, `NUMBER`).
- **Value Type**: The type of the values (e.g., `TEXT`, `NUMBER`).
- **Not Null**: Whether `NULL` values are disallowed (default is `False`, i.e. `NULL` values are allowed).
*Example Usage*
```python
IcebergTable(
    table_name,
    metadata,
    Column("id", Integer, primary_key=True),
    Column("map_col", MAP(NUMBER(10, 0), TEXT(16777216))),
    external_volume="external_volume",
    base_location="base_location",
)
```
#### OBJECT
The `OBJECT` type represents a semi-structured object with named fields. Each field can have a specific type, and you can also specify whether each field is nullable.
- **Items Types**: A dictionary of field names and their types. Each type can optionally be paired with a not-null flag (`True` means the field is not nullable; default is `False`).
*Example Usage*
```python
IcebergTable(
    table_name,
    metadata,
    Column("id", Integer, primary_key=True),
    Column(
        "object_col",
        OBJECT(key1=(TEXT(16777216), False), key2=(NUMBER(10, 0), False)),
        # or, without the not-null flags:
        # OBJECT(key1=TEXT(16777216), key2=NUMBER(10, 0)),
    ),
    external_volume="external_volume",
    base_location="base_location",
)
```
#### ARRAY
The `ARRAY` type represents an ordered list of values, where each element has the same type. The type of the elements is defined when creating the array.
- **Value Type**: The type of the elements in the array (e.g., `TEXT`, `NUMBER`).
- **Not Null**: Whether `NULL` values are disallowed (default is `False`, i.e. `NULL` values are allowed).
*Example Usage*
```python
IcebergTable(
    table_name,
    metadata,
    Column("id", Integer, primary_key=True),
    Column("array_col", ARRAY(TEXT(16777216))),
    external_volume="external_volume",
    base_location="base_location",
)
```
### CLUSTER BY Support
Snowflake SQLAlchemy supports the `CLUSTER BY` parameter for tables. For information about the parameter, see the [CREATE TABLE](https://docs.snowflake.com/en/sql-reference/sql/create-table) documentation.
This example shows how to create a table clustered by the `id` and `name` columns and a clustering expression:
```python
t = Table('myuser', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String),
    snowflake_clusterby=['id', 'name', text('id > 5')], ...
)
metadata.create_all(engine)
```
### Alembic Support
[Alembic](http://alembic.zzzcomputing.com) is a database migration tool on top of `SQLAlchemy`. Snowflake SQLAlchemy works with Alembic after you add the following code to `alembic/env.py` so that Alembic can recognize the Snowflake dialect.
```python
from alembic.ddl.impl import DefaultImpl

class SnowflakeImpl(DefaultImpl):
    __dialect__ = 'snowflake'
```
See [Alembic Documentation](http://alembic.zzzcomputing.com/) for general usage.
### Key Pair Authentication Support
Snowflake SQLAlchemy supports key pair authentication by leveraging its Snowflake Connector for Python underpinnings. See [Using Key Pair Authentication](https://docs.snowflake.net/manuals/user-guide/python-connector-example.html#using-key-pair-authentication) for steps to create the private and public keys.
The private key parameter is passed through `connect_args` as follows:
```python
import os

from snowflake.sqlalchemy import URL
from sqlalchemy import create_engine

from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import serialization

with open("rsa_key.p8", "rb") as key:
    p_key = serialization.load_pem_private_key(
        key.read(),
        password=os.environ['PRIVATE_KEY_PASSPHRASE'].encode(),
        backend=default_backend()
    )

pkb = p_key.private_bytes(
    encoding=serialization.Encoding.DER,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.NoEncryption())

engine = create_engine(
    URL(
        account='abc123',
        user='testuser1',
    ),
    connect_args={
        'private_key': pkb,
    },
)
```
Where `PRIVATE_KEY_PASSPHRASE` is a passphrase to decrypt the private key file, `rsa_key.p8`.
Currently, the `snowflake.sqlalchemy.URL` method does not accept a private key parameter.
### Merge Command Support
Snowflake SQLAlchemy supports upserting with its `MergeInto` custom expression.
See [Merge](https://docs.snowflake.net/manuals/sql-reference/sql/merge.html) for full documentation.
Use it as follows:
```python
from sqlalchemy.orm import sessionmaker
from sqlalchemy import MetaData, create_engine
from snowflake.sqlalchemy import MergeInto

engine = create_engine(db.url, echo=False)
session = sessionmaker(bind=engine)()
connection = engine.connect()

meta = MetaData()
meta.reflect(bind=session.bind)
t1 = meta.tables['t1']
t2 = meta.tables['t2']

merge = MergeInto(target=t1, source=t2, on=t1.c.t1key == t2.c.t2key)
merge.when_matched_then_delete().where(t2.c.marked == 1)
merge.when_matched_then_update().where(t2.c.isnewstatus == 1).values(val=t2.c.newval, status=t2.c.newstatus)
merge.when_matched_then_update().values(val=t2.c.newval)
merge.when_not_matched_then_insert().values(val=t2.c.newval, status=t2.c.newstatus)
connection.execute(merge)
```
### CopyIntoStorage Support
Snowflake SQLAlchemy supports saving tables/query results into different stages, as well as into Azure Containers and
AWS buckets with its custom `CopyIntoStorage` expression. See [Copy into](https://docs.snowflake.net/manuals/sql-reference/sql/copy-into-location.html)
for full documentation.
Use it as follows:
```python
from sqlalchemy.orm import sessionmaker
from sqlalchemy import MetaData, create_engine
from snowflake.sqlalchemy import CopyIntoStorage, AWSBucket, CSVFormatter

engine = create_engine(db.url, echo=False)
session = sessionmaker(bind=engine)()
connection = engine.connect()

meta = MetaData()
meta.reflect(bind=session.bind)
users = meta.tables['users']

copy_into = CopyIntoStorage(
    from_=users,
    into=AWSBucket.from_uri('s3://my_private_backup').encryption_aws_sse_kms('1234abcd-12ab-34cd-56ef-1234567890ab'),
    formatter=CSVFormatter().null_if(['null', 'Null'])
)
connection.execute(copy_into)
```
### Iceberg Table with Snowflake Catalog support
Snowflake SQLAlchemy supports Iceberg Tables with the Snowflake Catalog, along with various related parameters. For detailed information about Iceberg Tables, refer to the Snowflake [CREATE ICEBERG](https://docs.snowflake.com/en/sql-reference/sql/create-iceberg-table-snowflake) documentation.
To create an Iceberg Table using Snowflake SQLAlchemy, you can define the table using the SQLAlchemy Core syntax as follows:
```python
table = IcebergTable(
    "myuser",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
    external_volume=external_volume_name,
    base_location="my_iceberg_table",
    as_query="SELECT * FROM table"
)
```
Alternatively, you can define the table using a declarative approach:
```python
class MyUser(Base):
    __tablename__ = "myuser"

    @classmethod
    def __table_cls__(cls, name, metadata, *arg, **kw):
        return IcebergTable(name, metadata, *arg, **kw)

    __table_args__ = {
        "external_volume": "my_external_volume",
        "base_location": "my_iceberg_table",
        "as_query": "SELECT * FROM table",
    }

    id = Column(Integer, primary_key=True)
    name = Column(String)
```
### Hybrid Table support
Snowflake SQLAlchemy supports Hybrid Tables with indexes. For detailed information, refer to the Snowflake [CREATE HYBRID TABLE](https://docs.snowflake.com/en/sql-reference/sql/create-hybrid-table) documentation.
To create a Hybrid Table and add an index, you can use the SQLAlchemy Core syntax as follows:
```python
table = HybridTable(
    "myuser",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
    Index("idx_name", "name")
)
```
Alternatively, you can define the table using the declarative approach:
```python
class MyUser(Base):
    __tablename__ = "myuser"

    @classmethod
    def __table_cls__(cls, name, metadata, *arg, **kw):
        return HybridTable(name, metadata, *arg, **kw)

    __table_args__ = (
        Index("idx_name", "name"),
    )

    id = Column(Integer, primary_key=True)
    name = Column(String)
```
### Dynamic Tables support
Snowflake SQLAlchemy supports Dynamic Tables. For detailed information, refer to the Snowflake [CREATE DYNAMIC TABLE](https://docs.snowflake.com/en/sql-reference/sql/create-dynamic-table) documentation.
To create a Dynamic Table, you can use the SQLAlchemy Core syntax as follows:
```python
dynamic_test_table_1 = DynamicTable(
    "dynamic_MyUser",
    metadata,
    Column("id", Integer),
    Column("name", String),
    target_lag=(1, TimeUnit.HOURS),  # or use SnowflakeKeyword.DOWNSTREAM
    warehouse='test_wh',
    refresh_mode=SnowflakeKeyword.FULL,
    as_query="SELECT id, name from MyUser;"
)
```
Alternatively, you can define a table without columns using the SQLAlchemy `select()` construct:
```python
dynamic_test_table_1 = DynamicTable(
    "dynamic_MyUser",
    metadata,
    target_lag=(1, TimeUnit.HOURS),
    warehouse='test_wh',
    refresh_mode=SnowflakeKeyword.FULL,
    as_query=select(MyUser.id, MyUser.name)
)
```
### Notes
- Defining a primary key in a Dynamic Table is not supported, which means Dynamic Tables cannot be defined using the declarative approach.
- When using the `as_query` parameter with a string, you must explicitly define the columns. However, if you use the SQLAlchemy `select()` construct, you don’t need to explicitly define the columns.
- Direct data insertion into Dynamic Tables is not supported.
## Support
Feel free to file an issue or submit a PR here for general cases. For official support, contact Snowflake support at:
<https://community.snowflake.com/s/article/How-To-Submit-a-Support-Case-in-Snowflake-Lodge>