# snowpark-checkpoints-collectors
---
##### This package is in Public Preview.
---
The **snowpark-checkpoints-collectors** package offers a function for extracting information from PySpark DataFrames. That data can then be used to validate the converted Snowpark DataFrames and ensure that behavioral equivalence has been achieved.
## Features
- Schema inference mode (`CheckpointMode.SCHEMA`): The default mode. It leverages Pandera schema inference to obtain the metadata and checks that will be evaluated for the specified DataFrame, and it also collects custom data from the DataFrame's columns based on their PySpark types.
- DataFrame mode (`CheckpointMode.DATAFRAME`): This mode collects the data of the PySpark DataFrame itself, saving it in Parquet format. Using the default user Snowflake connection, it attempts to upload the Parquet files to a temporary Snowflake stage and create a table from the staged data. The file and the table are both named after the checkpoint.
## Functionalities
### Collect DataFrame Checkpoint
```python
from pyspark.sql import DataFrame as SparkDataFrame
from snowflake.snowpark_checkpoints_collector.collection_common import CheckpointMode
from typing import Optional
# Signature of the function
def collect_dataframe_checkpoint(
df: SparkDataFrame,
checkpoint_name: str,
sample: Optional[float] = None,
mode: Optional[CheckpointMode] = None,
output_path: Optional[str] = None,
) -> None:
...
```
- `df`: The input Spark DataFrame to collect.
- `checkpoint_name`: Name of the checkpoint; the output schema file or collected DataFrame table is named after it.
- `sample`: Fraction of the DataFrame to sample for schema inference; defaults to 1.0.
- `mode`: The collection mode (`CheckpointMode.SCHEMA` or `CheckpointMode.DATAFRAME`); defaults to `CheckpointMode.SCHEMA`.
- `output_path`: The output path where the checkpoint is saved; defaults to the current working directory. A short sketch exercising all of these parameters follows.
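
The sketch below is illustrative only; the checkpoint name, sample fraction, and output directory are arbitrary values chosen for this example, not defaults or requirements of the package:

```python
from pyspark.sql import SparkSession
from snowflake.snowpark_checkpoints_collector import collect_dataframe_checkpoint
from snowflake.snowpark_checkpoints_collector.collection_common import CheckpointMode

spark_session = SparkSession.builder.getOrCreate()
df = spark_session.createDataFrame(
    [("a", 1), ("b", 2)], schema="letter string, value integer"
)

collect_dataframe_checkpoint(
    df,
    checkpoint_name="demo_checkpoint",  # arbitrary example name
    sample=0.5,                         # infer the schema from half of the rows
    mode=CheckpointMode.SCHEMA,         # the default, passed explicitly here
    output_path="./checkpoints",        # write output here instead of the CWD
)
```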
## Usage Examples
### Schema mode
```python
from pyspark.sql import SparkSession
from snowflake.snowpark_checkpoints_collector import collect_dataframe_checkpoint
from snowflake.snowpark_checkpoints_collector.collection_common import CheckpointMode

spark_session = SparkSession.builder.getOrCreate()
sample_size = 1.0  # sample the entire DataFrame for schema inference

pyspark_df = spark_session.createDataFrame(
    [("apple", 21), ("lemon", 34), ("banana", 50)], schema="fruit string, age integer"
)

collect_dataframe_checkpoint(
pyspark_df,
checkpoint_name="collect_checkpoint_mode_1",
sample=sample_size,
mode=CheckpointMode.SCHEMA,
)
```
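
After the call returns, the collected schema is written under the output path (the current working directory in this example). The exact file names are an implementation detail of the package, so this hedged sketch simply searches for artifacts matching the checkpoint name rather than assuming a specific layout:

```python
import glob

# Hypothetical inspection step: the glob pattern is an assumption, since the
# collector's on-disk naming convention is not documented here.
for path in glob.glob("./**/*collect_checkpoint_mode_1*", recursive=True):
    print("checkpoint artifact:", path)
```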
### DataFrame mode
```python
from pyspark.sql import SparkSession
from snowflake.snowpark_checkpoints_collector import collect_dataframe_checkpoint
from snowflake.snowpark_checkpoints_collector.collection_common import CheckpointMode
from pyspark.sql.types import StructType, StructField, ByteType, StringType, IntegerType
spark_schema = StructType(
[
StructField("BYTE", ByteType(), True),
StructField("STRING", StringType(), True),
StructField("INTEGER", IntegerType(), True)
]
)
data = [(1, "apple", 21), (2, "lemon", 34), (3, "banana", 50)]

spark_session = SparkSession.builder.getOrCreate()

# Build the DataFrame and order it by the INTEGER column
pyspark_df = spark_session.createDataFrame(data, schema=spark_schema).orderBy(
    "INTEGER"
)

collect_dataframe_checkpoint(
pyspark_df,
checkpoint_name="collect_checkpoint_mode_2",
mode=CheckpointMode.DATAFRAME,
)
```
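
DataFrame mode requires a working Snowflake connection, since the collected Parquet files are uploaded to a temporary stage and materialized as a table named after the checkpoint. Assuming a default connection is already configured (for example via a `connections.toml` file), the resulting table could be inspected with an ordinary Snowpark session; this is a sketch, not part of the collector's API:

```python
from snowflake.snowpark import Session

# Assumes a default Snowflake connection is configured for the environment.
session = Session.builder.getOrCreate()

# Per the Features section, the table shares the checkpoint's name.
session.table("collect_checkpoint_mode_2").show()
```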