SMA Execution Guide
The SMA-Checkpoints feature involves an extensive workflow, so this section provides a walkthrough of how to use it.
The SMA-Checkpoints feature requires a PySpark workload as its entry point, since it depends on detecting the use of PySpark DataFrames. This walkthrough will guide you through the feature using a single Python script, providing a straightforward example of how checkpoints are generated and utilized within a typical PySpark workflow.
Input workload
Sample.py file content
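The original sample file is not reproduced here, but a minimal PySpark workload of the kind SMA-Checkpoints expects might look like the following. This is a hypothetical sketch, not the actual sample.py from the guide; the data, column names, and operations are illustrative only. What matters is that the script creates and transforms PySpark DataFrames, since that is what the feature detects.

```python
# sample.py -- hypothetical minimal PySpark workload (illustrative only)
from pyspark.sql import SparkSession

# Start a local Spark session for the example.
spark = SparkSession.builder.appName("sample").getOrCreate()

# Create a small DataFrame; SMA-Checkpoints detects DataFrame usage like this.
df = spark.createDataFrame([(1, "a"), (2, "b"), (3, "c")], ["id", "label"])

# A simple transformation producing a second DataFrame.
df2 = df.filter(df.id > 1)

df2.show()
spark.stop()
```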
If the SMA-Checkpoints feature is enabled, a checkpoints.json file will be generated. If the feature is disabled, this file will not be created in either the input or output folder. Regardless of whether the feature is enabled, the following inventory files will always be generated: DataFramesInventory.csv and CheckpointsInventory.csv. These files provide metadata essential for analysis and debugging.
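To get a feel for what these inventories contain, the sketch below parses a DataFramesInventory.csv-style file with the standard library. The column names used here are assumptions for illustration; the real headers are whatever SMA writes to your output folder.

```python
import csv
from io import StringIO

# Hypothetical DataFramesInventory.csv contents -- the real column names
# may differ; this only illustrates how to inspect the generated inventory.
sample = StringIO(
    "Element,FileName,Line\n"
    "df,sample.py,10\n"
    "df2,sample.py,12\n"
)

# Parse the CSV into a list of dicts keyed by the header row.
rows = list(csv.DictReader(sample))
for row in rows:
    print(row["Element"], row["FileName"], row["Line"])
```

In practice you would open the CSV from the output folder instead of an in-memory string.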
Note: This user guide uses the default conversion settings.
Once the migration process is complete, the SMA-Checkpoints feature will have created two new inventory files and added a checkpoints.json file to both the input and output folders.
checkpoints.json file content
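The exact schema of checkpoints.json is produced by SMA and is not reproduced here. As a sketch of how you might inspect such a file, the snippet below loads a hypothetical document with a "checkpoints" list and prints the checkpoint names; both key names are assumptions for illustration.

```python
import json
from io import StringIO

# Hypothetical checkpoints.json structure -- the real schema generated by
# SMA may differ; this only shows how to load and list entries.
sample = StringIO(json.dumps({
    "checkpoints": [
        {"name": "df_checkpoint_1", "df": "df"},
        {"name": "df2_checkpoint_1", "df": "df2"},
    ]
}))

data = json.load(sample)
names = [cp["name"] for cp in data.get("checkpoints", [])]
print(names)
```

In practice you would pass the path of the generated file to `open()` instead of using an in-memory string.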
Once the SMA execution flow is complete and both the input and output folders contain their respective checkpoints.json files, you are ready to begin the Snowpark-Checkpoints execution process.
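Before starting the Snowpark-Checkpoints execution, it can be useful to sanity-check that the input and output folders agree on which checkpoints exist. The helper below is a sketch under the assumption that each file has a "checkpoints" list of entries with a "name" key; adjust the keys to match the files SMA actually generated for you.

```python
def same_checkpoints(input_doc: dict, output_doc: dict) -> bool:
    """Return True when both documents list the same checkpoint names.

    The 'checkpoints'/'name' keys are assumptions about the file layout,
    used here only for illustration.
    """
    def names(doc: dict) -> list:
        return sorted(cp["name"] for cp in doc.get("checkpoints", []))
    return names(input_doc) == names(output_doc)

# Demo with in-memory documents; in practice you would json.load() the
# checkpoints.json from the input and output folders and compare them.
doc_a = {"checkpoints": [{"name": "df_checkpoint_1"}]}
doc_b = {"checkpoints": [{"name": "df_checkpoint_1"}]}
print(same_checkpoints(doc_a, doc_b))  # -> True
```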
To convert your own project, follow this guide: .
As part of the conversion process you can customize your conversion settings; see the feature settings for details.
Take a look at to review the related inventories.