Release Notes
Updates on each released version of Snowflake SnowConvert for Spark
Scala 0.2.13
SparkSnowConvert Core 1.1.27
New Features
- Jupyter notebooks (.ipynb) processing
- EWI generation when a dependency couldn't be added to the project config file
Improvements
- Improved opening and closing of lambda scopes
Bug Fixes
- Bug 680497: Renaming functions to their fully qualified names is not working correctly
- Bug 681704: Unable to generate final report
Scala 0.2.4
SparkSnowConvert Core 1.1.8.0
Hotfix
- API endpoints update
Scala 0.2.4
SparkSnowConvert Core 1.1.8.0
Added
- .NET Core 6 upgrade
- ElementPackage column added to imports inventory
- Sizing table added to assessment reports
- Conversion percentage added to the reports synced with BDS
- issues.csv file added to the output
- SummaryReport.html and DetailedReport.html (mirroring the .docx HTML) generated locally in the Reports folder
- ConversionStatus keywords added to GenericScanner
- Full name conversion support
Improvements
- org.apache.spark.mllib mappings added to the core reference table
- [UI] Fix wording when cancelling the execution
- [UI] Change UI phase titles
- Group issues by EWI code
- Update TOOL_VERSION column value format on Execution info table
- Simplified the issue summary table to reduce its size
Bug Fixes
- Resolved an issue with backslash handling
- Resolved a line-break issue
- Resolved a lambda-block corner case
- Removed AssessmentReport.html generation (local HTML report)
Scala 0.1.493
SparkSnowConvert Core 1.0.117.0
Added
- Uploading packages inventory to cloud telemetry
Improvements
- Detailed report
- Minor visual improvements
- Sorting issue table by:
- Instances
- Code
- Description
Scala 0.1.492
SparkSnowConvert Core 1.0.105.0
Added
- Added a margin of error description in the detailed report
Improvements
- Improved sorting of issues table in the detailed report
- Improved display of percentages in the detailed report
Bug Fixes
- The <#> character is causing issues
- Compose is not recognized as a keyword
- Parser is not working on 'join' argument
- Scala code processor throwing critical error
Scala 0.1.487
SparkSnowConvert Core 1.0.88
Improvements
- Customer information added to the detailed assessment report
- Transformation logging messages
Bug Fixes
- An issue with expressions like (a, b) =>val c
- compose not being recognized as a keyword
Scala 0.1.484
SparkSnowConvert Core 1.0.77
Added
- Snowpark mappings updated to version 1.6.2
- Improved collection of functions without parentheses during assessment
- Maven project file (pom.xml) processing
- ClassName column renamed to 'alias' in SparkUsagesInventory.pam and ImportUsagesInventory.pam
- Added a margin of error to the readiness score
Fixed
- Snowpark Python and Scala posted version update
- Issue with a new line after the name of functions
Scala 0.1.478
SparkSnowConvert Core 1.0.60
Added
- Basic companion object support
- org.apache.spark.sql.Column mappings update
- org.apache.spark.sql.Expression mappings update
- org.apache.spark.sql.functions mappings update
- Reference extensions dependency from project config file (SBT)
- Reference extensions dependency from project config file (Gradle)
Fixed
- "Script" code is not supported
Scala 0.1.472
SparkSnowConvert Core 1.0.44
Added
- Spark mappings update
- Trim "FileId" column value in all .pam files
- ConversionStatus and scala_spark_mappings_core.csv unification
Scala 0.1.472
SparkSnowConvert Core 1.0.37
Added
- SparkSession, DataFrameReader, and DataFrameWriter mappings update
- EWI Generation for unary and binary expressions
Fixed
- Writer replacer supports csv, parquet, json, and options
- Reader replacer does not support functions without parentheses
- Writer replacer does not support functions without parentheses
- The transformation of InsertInto does not produce valid code
- Writer replacer does not include all functions
Scala 0.1.468
SparkSnowConvert Core 1.0.23
Added
- Symbol resolution for function calls without parentheses
- Exception handling for scope opening/closing (in Replacers)
- EWI generation for unsupported imports (complex cases)
- EWI generation for undefined imports
- SparkSession transformation improvements
- DataFrame reader/writer transformation improvements
- "Spark Usages by Support Category" and "Scala Import Call Summary" sections added to Detailed report
- RDD mappings update
Fixed
- Stack overflow that prevented output files from being generated
- Expression without parentheses in the Spark Session replacer transformation
Scala 0.1.458
SparkSnowConvert Core 0.1.530
Added
- Updated helper/extension .jar to latest version
- Updated assessment .docx report template
- Import usages inventory generation
- EWI generation for unsupported imports (simple case)
Fixed
- Non-determinism issue in the SymbolTable
- Error when sorting Spark usages inventory files
- SclSingleExprPath must not contain null members
- "The collection was modified; the enumeration operation may not execute" error
- Parsing does not finish when there are multiple closing multi-line comments in a row
- Issue with expression
- FileNotGenerated error
Scala 0.1.442
SparkSnowConvert Core 0.1.499
Fixed
- The settings button does not refresh when the license is changed
Scala 0.1.442
SparkSnowConvert Core 0.1.498
Added
- Symbol table built-ins loading improvements
- Added robustness to symbol table loaders
Fixed
- Error in the total count of Scala files in the AssessmentReport
- Symbol resolution for generic functions using an asterisk
- Parsing errors with nested comments, identifier prefixes, and interpolation
- Parsing error for a comma after an identifier
- Parsing error when the first statement takes the pattern of the second statement
- Parsing errors for the "and", "::", "++", and "or" operators
Scala 0.1.430
SparkSnowConvert Core 0.1.491.0
Added
- Symbol loading/resolving: support for generic methods with asterisk params
- Symbol loading/resolving: type inference for type defs
- Symbol loading/resolving: general improvements
Fixed
- Issue related to import usages not being stored when there are no Spark references
Scala 0.1.427
SparkSnowConvert Core 0.1.486.0
Added
- Cloud telemetry and sending email mechanism now available in Conversion Mode
- Updated contact information in the email template
Scala 0.1.426
SparkSnowConvert Core 0.1.476.0
Added
- 'SnowConvert Version' and 'Snowpark version' columns to SparkUsagesInventory
- Analysis speed improvements
Scala 0.1.422
SparkSnowConvert Core 0.1.454.0
Added
- Automated and Status columns added to SparkReferenceInventory.csv
- Summary and detailed html report uploading to Snowflake
- Mappings update
Fixed
- Summary and detailed report wording fixes
- Email template wording fixes
Scala 0.1.421
SparkSnowConvert Core 0.1.414
Added
- Email template update
- "Version information" section added to Summary report
- "Resources" section added to Detailed report
- Final screen UI changes
Fixed
- Missing Spark functions in sparkUsagesInventory.pam
- Detailed report logos update
- Percentage value precision on summary and detailed assessment reports
Scala 0.1.421
SparkSnowConvert Core 0.1.396
Added
- Spark read and write transformations improvements
- Session ID column added to Spark usages inventory
Scala 0.1.411
SparkSnowConvert Core 0.1.279
Added
- Spark read and write transformations
- Spark trim, rtrim and ltrim function transformations
- String interpolation parsing
- Increased SQL extraction match patterns
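As an illustration of the trim-family function transformations listed above, a before/after sketch follows. The two halves are separate source files, and the exact Snowpark imports and overloads the tool emits are assumptions here, not the guaranteed output:

```scala
// Before: Spark source
import org.apache.spark.sql.functions.{col, ltrim, rtrim, trim}
val cleaned = df.select(trim(col("name")), ltrim(col("name")), rtrim(col("name")))
```

```scala
// After: Snowpark target (illustrative sketch only; generated code may differ)
import com.snowflake.snowpark.functions.{col, lit, ltrim, rtrim, trim}
val cleaned = df.select(trim(col("name"), lit(" ")), ltrim(col("name")), rtrim(col("name")))
```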
Scala 0.1.402
SparkSnowConvert Core 0.1.274
Added
- File operations robustness
- Output folders reorganization
- SparkSession builder transformation
- Added "Scala files with embedded SQL" count to assessment reports
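The SparkSession builder transformation mentioned above maps Spark's session construction onto Snowpark's. A hedged sketch (the connection properties are placeholders, and the exact generated shape may differ):

```scala
// Before: Spark source
val spark = org.apache.spark.sql.SparkSession.builder
  .appName("example")
  .getOrCreate()
```

```scala
// After: Snowpark target (sketch; placeholder connection settings)
val session = com.snowflake.snowpark.Session.builder
  .configs(Map(
    "URL"  -> "https://<account>.snowflakecomputing.com",
    "USER" -> "<user>",
    "ROLE" -> "<role>"
  ))
  .create
```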
Fixed
- Cyclic dependencies issue on Symbol Table
- Empty case clause parsing
- Multiple statements on lambda block parsing
- Case clause pattern parsing
Scala 0.1.391
SparkSnowConvert Core 0.1.229
Added
- Parsing robustness
- .sbt configuration files processing
- Issues breakdown section added in assessment html report
- Look and feel improvements in assessment html report
- Using RapidScanner inventories to calculate the spark usages assessment
- macOS CLI & UI support
- Improvements in import statements mappings
Scala 0.1.380
Added
- Scala Parser
- Double exclamation mark support
- Conversion tool
- SQL extraction
- object_struct function transformation
- avg function transformation
- Snowpark extensions .jar update
- Lines of code report
- Import mappings
- .docx and HTML assessment reports
- RapidScan integration
- Linux OS support
Fixed
- Binary expressions special cases parsing
Scala 0.1.358
Added
- Scala Parser
- Support underscore followed by newline when parsing expressions
- Improve parsing errors handling
- Symbols
- Improve support of Unresolved Symbols
- Improve creation of Generic Symbols to reuse existing ones
- Support Loading and Resolution of Lambda Expressions
- Mappings:
- Support custom mappings for functions and types via .map files
- Added custom map directory parameter
Fixed
- Fill missing columns in the notification .pam file
- Generate metrics data files (.pam) in the specified reports folder
Added
- Updated logos and text in UI and Documentation
- Symbols
- Support Generic Identifiers on Type Parameters for Generic Symbol
- Exclusion of not required dependencies
- ScalaParser:
- Backtick identifiers
- ArgAssign expressions
Fixed
- ScalaParser:
- ExprLambda with ColonType next to ident
- Try expression when try is not referring to a keyword
- Empty lambda expr with args
- Underscore ("_") in TypeArgs
- Files whose source is entirely commented out
- New lines at SimpleExpr, SingleExpr, TailExpr nodes
- ConversionTool:
- Fixed conversion crash due to javap parsing errors (related to .jar dependencies)
Features
- Command line interface.
- Scala code assessment feature.
- Consume multiple files or single files with multiple objects.
- Conversion of basic Scala programs as defined by functions and syntax to be mutually agreed during the first 3 development sprints.
- Comments in Scala code are re-inserted inline.
- Insert comments in-line with any errors/warning/reviews.
- Basic reporting including
- Number of spark elements processed
- Summary of elements transformed, with their files and locations
- Summary of errors/warnings/reviews encountered.
- Summary of unsupported Spark APIs
- Demonstrated inclusion of the following defined scenarios:
- API mappings
- Recreate projects as Snowpark projects
- Set up a proper project structure
- Update to the Snowpark-supported Scala version
- Helper Creation to reduce impedance mismatch
- Define some pattern rewrite
- Document guidelines for non-automatable concepts (e.g.: file usage patterns, data source configuration, or spark libraries without a direct equivalent, like Kafka stream reading)
- Greater than 90% successful conversion rate for initial two customer code bases (basis code for the above scenarios) to be provided to Mobilize by Snowflake on the Effective Date.
- Measured based upon number of compilable objects in Snowflake
- Objects with unsupported/untranslatable functions not counted
- Conversion rate for code will be based upon a complete code base containing all dependent objects.
- Snowflake will provide access to all available private preview features for Mobilize development benefit