Tool Execution

Running SnowConvert for Spark

Once you have set up your project, you can run SnowConvert for Spark. This is the simplest part of using the tool.

On the Project Creation page, you can run SnowConvert by choosing Save & Start Assessment in the bottom right.

This will begin the execution of the tool. All of the files in the input directory will be scanned. As the tool runs, you will see a screen that looks like this:

There are three phases of the execution:

  • Loading Source Code: SnowConvert for Spark will scan all of the files in the input directory. It will create the file inventory, and from this inventory, it will build the semantic model from the code found in files with the extensions under consideration.

  • Analyzing Source Code: This is the bulk of the operation. SnowConvert for Spark builds an Abstract Syntax Tree (AST), which serves as a semantic model of the source codebase. This model represents the functionality present in the source code. As the AST is built, so is a symbol table, which allows the tool to represent the elements and functionality present in the model as symbols that can be tracked through to the other side of the conversion process. This symbol table is queried to produce all of the output reporting that is generated by the tool. Once the AST is built, in conversion mode the tool takes the elements of the tree that have a functional equivalent in Snowflake and pretty prints that Snowflake element in the output where the input element is found.

  • Writing Results: The final step is to generate the output. In Assessment mode, this generates the reports. In conversion mode, both the reports and the output code are generated in the output folder specified in the project setup.
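The three phases above can be sketched conceptually. This is only an illustration of the inventory → AST/symbol table → output flow, not SnowConvert's actual implementation; it uses Python's standard `ast` module and hypothetical helper names to show the idea:

```python
import ast
from pathlib import Path

def build_inventory(input_dir, extensions=(".py",)):
    """Phase 1 (Loading Source Code): scan the input directory and
    inventory files with the extensions under consideration."""
    return [p for p in Path(input_dir).rglob("*") if p.suffix in extensions]

def analyze(source):
    """Phase 2 (Analyzing Source Code): parse the source into an AST
    and build a simple symbol table of the elements found in it."""
    tree = ast.parse(source)
    symbols = {}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.ClassDef)):
            symbols[node.name] = type(node).__name__
    return tree, symbols

# Phase 3 (Writing Results) would query the symbol table to emit
# reports and, in conversion mode, the converted output code.
_, symbols = analyze("def load_table():\n    pass\n\nclass Job:\n    pass\n")
print(symbols)  # {'load_table': 'FunctionDef', 'Job': 'ClassDef'}
```

The key design point mirrored here is that reporting is driven entirely by the symbol table built during analysis, so assessment and conversion share the same model of the source code.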

The View Results button becomes available once all three phases have completed. Selecting it takes you to the assessment output screen.
