Teradata
Numeric Data Operations
Calculation Precision
Teradata rounds after every step of a calculation, based on the data types involved in the operation. For decimal types, each intermediate result keeps the larger precision of its operands; the NUMBER data type keeps full precision. Snowflake stores every numeric value as a NUMBER and therefore keeps full precision throughout the calculation, which can produce different results for any numeric type (decimal, integer, float, etc.) other than NUMBER. SnowConvert does not modify this behavior during code conversion, since the step-by-step rounding is rarely the result the programmer intended.
Teradata: SELECT (1.00/28) * 15.00 returns 0.60
Snowflake: SELECT (1.00/28) * 15.00 computes 0.535714..., returned as 0.54
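The two strategies can be sketched with Python's decimal module (an illustration only, not engine code; the two-decimal intermediate scale is an assumption matching the example above):

```python
from decimal import Decimal, ROUND_HALF_UP

TWO_PLACES = Decimal("0.01")

# Teradata-style: round the intermediate result to the operands' scale (2 here).
step = (Decimal("1.00") / Decimal("28")).quantize(TWO_PLACES, rounding=ROUND_HALF_UP)
teradata_style = (step * Decimal("15.00")).quantize(TWO_PLACES, rounding=ROUND_HALF_UP)

# Snowflake-style: keep full precision throughout, round only at the end.
snowflake_style = (Decimal("1.00") / Decimal("28") * Decimal("15.00")).quantize(
    TWO_PLACES, rounding=ROUND_HALF_UP)

print(teradata_style, snowflake_style)  # 0.60 0.54
```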
Integer-Integer Division
Teradata performs a truncation (floor) when dividing two integer values, whereas Snowflake computes the quotient at full precision and rounds the result. The automated code conversion accounts for this by wrapping such divisions in a TRUNC function.
Teradata: SELECT 5/3 returns 1
Snowflake: SELECT 5/3 computes 1.666666..., returned as 2
Converted Snowflake: SELECT TRUNC(5/3) returns 1
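In Python terms (a sketch; math.trunc stands in for SQL TRUNC):

```python
import math

q = 5 / 3                      # full-precision quotient, 1.666...
teradata_like = math.trunc(q)  # Teradata truncates the integer division: 1
snowflake_like = round(q)      # Snowflake rounds the full-precision result: 2
converted = math.trunc(q)      # the converted TRUNC(5/3) restores the Teradata result
```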
Banker's Rounding
Teradata uses banker's rounding (round half to even) by default; the DBS Control setting RoundHalfwayMagUp switches it to normal rounding. Snowflake only uses normal rounding (halfway values round away from zero).
| Expression | Teradata (banker's rounding) | Snowflake (normal rounding) |
|---|---|---|
| CAST(1.05 AS DECIMAL(9,1)) | 1.0 | 1.1 |
| CAST(1.15 AS DECIMAL(9,1)) | 1.2 | 1.2 |
| CAST(1.25 AS DECIMAL(9,1)) | 1.2 | 1.3 |
| CAST(1.35 AS DECIMAL(9,1)) | 1.4 | 1.4 |
| CAST(1.45 AS DECIMAL(9,1)) | 1.4 | 1.5 |
| CAST(1.55 AS DECIMAL(9,1)) | 1.6 | 1.6 |
| CAST(1.65 AS DECIMAL(9,1)) | 1.6 | 1.7 |
| CAST(1.75 AS DECIMAL(9,1)) | 1.8 | 1.8 |
| CAST(1.85 AS DECIMAL(9,1)) | 1.8 | 1.9 |
| CAST(1.95 AS DECIMAL(9,1)) | 2.0 | 2.0 |
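The two columns correspond to the ROUND_HALF_EVEN (banker's) and ROUND_HALF_UP modes of Python's decimal module, which can be used to reproduce the table (an illustration, not engine code):

```python
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP

ONE_PLACE = Decimal("0.1")
for text in ("1.05", "1.15", "1.25", "1.35", "1.45"):
    v = Decimal(text)
    bankers = v.quantize(ONE_PLACE, rounding=ROUND_HALF_EVEN)  # Teradata column
    normal = v.quantize(ONE_PLACE, rounding=ROUND_HALF_UP)     # Snowflake column
    print(text, bankers, normal)
```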
Decimal to Integer Conversion
Teradata truncates decimal values when converting to integer, while Snowflake rounds to the nearest integer. The conversion accounts for this by inserting a TRUNC function.
| Expression | Teradata | Snowflake |
|---|---|---|
| CAST(1.0 AS INTEGER) | 1 | 1 |
| CAST(1.1 AS INTEGER) | 1 | 1 |
| CAST(1.2 AS INTEGER) | 1 | 1 |
| CAST(1.3 AS INTEGER) | 1 | 1 |
| CAST(1.4 AS INTEGER) | 1 | 1 |
| CAST(1.5 AS INTEGER) | 1 | 2 |
| CAST(1.6 AS INTEGER) | 1 | 2 |
| CAST(1.7 AS INTEGER) | 1 | 2 |
| CAST(1.8 AS INTEGER) | 1 | 2 |
| CAST(1.9 AS INTEGER) | 1 | 2 |
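The same truncate-versus-round difference for the cast of 1.5, sketched in Python (math.trunc stands in for the TRUNC the conversion inserts; ROUND_HALF_UP models Snowflake's half-away-from-zero rounding):

```python
import math
from decimal import Decimal, ROUND_HALF_UP

v = Decimal("1.5")
teradata_int = math.trunc(v)                                           # truncation: 1
snowflake_int = int(v.quantize(Decimal("1"), rounding=ROUND_HALF_UP))  # rounding: 2
```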
Number without Precision/Scale
A NUMBER defined without precision/scale in Teradata allows a flexible scale for each individual value, from 0 to 38, provided the total precision never exceeds 38. Snowflake does not allow this: a NUMBER column always has a fixed precision and scale. For example, a Teradata table may define several columns simply as NUMBER, with no precision or scale specified.
Consider two values that could each be stored in such a flexible Teradata NUMBER column, but would not both fit into a single fixed-precision Snowflake column:
Value 1: 123,345,678,901,234,567,891,012.0123456789
Value 2: 123.12345678901234567890
Storing both values in one column would require a fixed type covering 24 integer digits (from Value 1) plus 20 fractional digits (from Value 2), which exceeds Snowflake's maximum precision of 38. At the time of writing, Snowflake is developing functionality for flexible precision/scale.
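As a sanity check on the digit counts, Python's decimal module can report each value's precision and scale (an illustration only):

```python
from decimal import Decimal

def precision_and_scale(d: Decimal) -> tuple:
    """Total significant digits and fractional digits of a Decimal."""
    _, digits, exponent = d.as_tuple()
    scale = -exponent
    return len(digits), scale

v1 = Decimal("123345678901234567891012.0123456789")  # 24 integer digits, scale 10
v2 = Decimal("123.12345678901234567890")             # 3 integer digits, scale 20

p1, s1 = precision_and_scale(v1)
p2, s2 = precision_and_scale(v2)
# A single fixed column must cover the larger integer-digit count plus the
# larger scale simultaneously, which is above Snowflake's limit of 38.
required_precision = max(p1 - s1, p2 - s2) + max(s1, s2)
```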
Truncation on INSERT for SQL DML Statements
Teradata automatically truncates a string value on insert if the string is too long to fit into the specified field. SnowConvert converts fields on a like-for-like basis (e.g., VARCHAR(20) -> VARCHAR(20)), so an ingestion process that relies on auto-truncation will need to be adjusted manually to truncate the data, for example with a LEFT() function. SnowConvert does not add this automatically because doing so across an entire code base has multiple implications.
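The manual fix amounts to capping the string at the column length; a minimal Python sketch of what SQL LEFT() does (the column width of 20 is an assumption matching the VARCHAR(20) example):

```python
def left(s: str, n: int) -> str:
    """Mimic SQL LEFT(col, n): keep at most the first n characters."""
    return s[:n]

# An ingestion step that relied on Teradata auto-truncation into VARCHAR(20)
# would need the equivalent of LEFT(col, 20) added explicitly for Snowflake.
value = left("a string that is longer than twenty characters", 20)
```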
Float Data Aggregation
Float data types are by definition approximations, so different databases may aggregate float values differently, depending on how and where those approximations are handled in each database's internal calculations.
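The effect is easy to reproduce with IEEE-754 doubles, for which addition is not associative; a database that parallelizes an aggregation may effectively reorder it (Python illustration):

```python
vals = [1e16, 1.0, -1e16]

# Left-to-right: 1.0 is absorbed by 1e16 (below its rounding step), sum is 0.0.
left_to_right = (vals[0] + vals[1]) + vals[2]

# Reordered: the large values cancel first, so the 1.0 survives.
reordered = (vals[0] + vals[2]) + vals[1]
```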
Other Considerations
Join Elimination
Snowflake does not currently eliminate unnecessary joins from any SQL that it runs; it executes the SQL assuming every object referenced can affect the result set. Teradata, by contrast, has built-in join elimination that leverages primary/foreign key relationships defined in the DDL. Because the primary purpose of that feature is to guard against improperly written queries, this difference is rarely an issue unless code was written to intentionally take advantage of it. Where views or DML were deliberately designed to exploit join elimination, code conversion cannot compensate, and re-architecting part of that solution design may be required.
Using MAX() OVER (ORDER BY) and other non-rank-related window functions
Teradata behavior and defaults:
Default: In the presence of an ORDER BY clause and the absence of a ROWS or ROWS BETWEEN clause, Teradata SQL window aggregate functions use ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING.
Snowflake behavior and defaults:
Default: In the presence of an ORDER BY clause and the absence of an explicit frame clause, Snowflake window aggregate functions use a cumulative frame, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, which includes peer rows that share the current ORDER BY value.
Example:
Below is a TEST_WIN table containing salary information for different employees in different departments.
| Dept | Dept No | Emp No | Salary |
|---|---|---|---|
| SALES | 10 | 11 | 5000 |
| SALES | 10 | 12 | 6000 |
| HR | 20 | 21 | 1000 |
| HR | 20 | 22 | 2000 |
| PS | 30 | 31 | 7000 |
| PS | 30 | 32 | 9000 |
Executing a MAX(SALARY) OVER (ORDER BY ...) query against this table in Teradata returns the maximum salary across all departments for every row:
| Dept | Salary | Dept No | Max Salary |
|---|---|---|---|
| SALES | 6000 | 10 | 9000 |
| SALES | 5000 | 10 | 9000 |
| HR | 2000 | 20 | 9000 |
| HR | 1000 | 20 | 9000 |
| PS | 7000 | 30 | 9000 |
| PS | 9000 | 30 | 9000 |
However, executing the same code as converted by SnowConvert produces different results (note the Max Salary values for the SALES and HR rows), which are correct under Snowflake's defaults.
| Dept | Salary | Dept No | Max Salary |
|---|---|---|---|
| SALES | 5000 | 10 | 6000 |
| SALES | 6000 | 10 | 6000 |
| HR | 1000 | 20 | 6000 |
| HR | 2000 | 20 | 6000 |
| PS | 7000 | 30 | 9000 |
| PS | 9000 | 30 | 9000 |
To produce the same results as in Teradata, the frame must be specified explicitly, i.e., ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING:
| Dept | Salary | Dept No | Max Salary |
|---|---|---|---|
| SALES | 5000 | 10 | 9000 |
| SALES | 6000 | 10 | 9000 |
| HR | 1000 | 20 | 9000 |
| HR | 2000 | 20 | 9000 |
| PS | 7000 | 30 | 9000 |
| PS | 9000 | 30 | 9000 |
Adding an explicit RANGE/ROWS clause specifies exactly how the ORDER BY clause is applied. Similar behavior can also be achieved by omitting the ORDER BY clause altogether.
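The two defaults can be simulated outside the database. The sketch below (column names assumed) reproduces the Teradata-style whole-window maximum and the Snowflake-style cumulative frame over the TEST_WIN rows, ordered by department number:

```python
# TEST_WIN rows as (dept, dept_no, salary).
rows = [("SALES", 10, 5000), ("SALES", 10, 6000),
        ("HR", 20, 1000), ("HR", 20, 2000),
        ("PS", 30, 7000), ("PS", 30, 9000)]

ordered = sorted(rows, key=lambda r: r[1])

# Teradata default: the frame spans the whole window -> one global maximum.
teradata_max = [max(s for _, _, s in ordered)] * len(ordered)

# Snowflake default: cumulative frame up to the current row, with peers
# (rows sharing the current ORDER BY value) included.
snowflake_max = [max(s for _, k, s in ordered if k <= dept_no)
                 for _, dept_no, _ in ordered]
```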
References
- Snowflake window function documentation: https://docs.snowflake.com/en/sql-reference/functions-analytic.html
- Teradata documentation: https://docs.teradata.com/r/756LNiPSFdY~4JcCCcR5Cw/dIV_fAtkK3UeUIQ5_uucQw