snowpark-utilities


Name: snowpark-utilities
Version: 0.1.2
Home page: https://teddycaulton.xyz
Summary: A helpful package for making snowpark code easier to write and more legible
Upload time: 2024-03-11 21:46:14
Author: Theodore Caulton
License: MIT
Keywords: snowpark, snowflake, data science
Requirements: none recorded
# Snowpark-Utilities
## Description
Snowpark Utilities is a set of Python tools aimed at easing much of the repetitive work around using Snowflake's Snowpark API.  These tools aim to make it easier to stand up new Snowpark sessions or execute SQL commands, especially in environments where multiple sessions are needed.  The module contains functionality both for users who want to feed credentials directly for authentication and for those working with a tool like AWS Secrets Manager where credentials might be stored.  The aim of this project is to make it faster and cleaner to stand up new Snowpark projects without copying and pasting code from similar endeavors or combing through documentation.

## __init__
If utilizing AWS Secrets Manager to provide Snowflake credentials, you must specify cloud_provider = "aws" or cloud_provider = "AWS" as well as the proper region name, access key ID, and secret access key.
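As a rough sketch of what that setup might look like (the import path, class name, and keyword names below are assumptions, not confirmed by the package docs):
```
# Hypothetical setup only: the class name and parameter names are assumptions.
from snowpark_utilities import snowpark_utilities

utils = snowpark_utilities(
    cloud_provider="aws",        # "aws" or "AWS" enables AWS Secrets support
    region_name="us-east-1",     # AWS region holding the secret
    access_key_id="AKIA...",     # assumed parameter name
    secret_access_key="...",     # assumed parameter name
)
```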

## fetch_credentials_from_secrets(secret_name)
This function takes an AWS secret name as input and returns the credentials as a dictionary, where they can be used either in create_snowpark_session() or for other purposes.
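A minimal sketch of how this might look (the import path and secret name are placeholders; the credential key names follow the convention described under aws_create_snowpark_session() below):
```
# Sketch only: import path and secret name are placeholders.
from snowpark_utilities import fetch_credentials_from_secrets, create_snowpark_session

creds = fetch_credentials_from_secrets("my-snowflake-secret")

session = create_snowpark_session(
    creds["username"],
    creds["password"],
    creds["account"],
    "MY_ROLE",
    "MY_WAREHOUSE",
)
```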

## create_snowpark_session(username, password, account, role, warehouse)
This function takes in the above five required inputs and returns a "session" variable which can be used for Snowpark operations.  If you were to, for example, have a "parent" and "child" Snowflake account and needed sessions for both, you could run the following:
```
parent_session = create_snowpark_session('parent_user', 'parent_password', 'parent_account', 'parent_role', 'parent_warehouse')
child_session = create_snowpark_session('child_user', 'child_password', 'child_account', 'child_role', 'child_warehouse')
```
Now it's simple to differentiate execution between the two accounts.

## aws_create_snowpark_session(secret_name, role, warehouse)
This function is a version of create_snowpark_session() made explicitly for use with AWS Secrets Manager.  Simply feed it the appropriate secret name and, as long as the username is filed under the key name "username", the password under "password", and the account under "account", it will return a session.  If you don't use this naming scheme and still want to use secrets, it's simple to modify this function or fetch the credentials using fetch_credentials_from_secrets() and parse the dictionary yourself.
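As a sketch (the secret name, role, and warehouse are placeholders; the import path is assumed), the secret is expected to hold exactly those keys and the session then comes back in one call:
```
# The secret in AWS Secrets Manager should hold keys named exactly like this:
# {"username": "...", "password": "...", "account": "..."}
from snowpark_utilities import aws_create_snowpark_session  # assumed import path

session = aws_create_snowpark_session("my-snowflake-secret", "MY_ROLE", "MY_WAREHOUSE")
```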

## execute_sql(session, command)
An annoyance I've had with Snowpark in terms of ease of use and code readability is that defining code and executing code are two distinctly different operations.  You can always define a piece of SQL for execution using session.sql("sql code"), but executing it requires a .collect() at the end of the line.  This function takes in the given session and desired command and executes it all at once.  If you wish to do anything with .to_pandas() you will still need to write that manually, but this works great for anything else, and since the function returns the result of .collect() you can also pass its output to pd.DataFrame().
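A short sketch of the difference (import path assumed); both approaches run the statement, the helper just saves the trailing .collect() and hands back its result:
```
# Plain Snowpark: define the query, then execute it
session.sql("SELECT CURRENT_VERSION()").collect()

# With the helper: define and execute in one call, the collected rows are returned
from snowpark_utilities import execute_sql  # assumed import path
rows = execute_sql(session, "SELECT CURRENT_VERSION()")

# Because the collected rows come back, they can be fed straight to pandas
import pandas as pd
df = pd.DataFrame(rows)
```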

## execute_sql_pandas(session, command)
Given a session and a SELECT command, this function executes it and returns the result as a pandas DataFrame.
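A minimal usage sketch (the import path and table name are placeholders):
```
from snowpark_utilities import execute_sql_pandas  # assumed import path

df = execute_sql_pandas(session, "SELECT * FROM my_db.my_schema.my_table")
print(df.head())
```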

## snowflake2snowflakevalidation(session_source, session_target, database)
In the case of a database migration, it can be time consuming to ensure all tables were successfully migrated.  This function takes in a source and target Snowpark session along with the database in question and returns a dataframe of all tables from the source and the associated row counts in the two accounts.
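A usage sketch, reusing the parent/child sessions from above as source and target (the import path and database name are placeholders):
```
from snowpark_utilities import snowflake2snowflakevalidation  # assumed import path

validation_df = snowflake2snowflakevalidation(parent_session, child_session, "MY_DATABASE")
print(validation_df)  # one row per source table with row counts from source and target
```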

            
