Every Error, Warning, and Issue (EWI) in PySpark

Issue Codes

All of the warnings, parsing errors, and conversion errors generated by the SMA when taking Python as a source are listed below. If you have any concerns or see something that's not right, please reach out to the SMA support team at sma-support@snowflake.com.
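Several of the "has a workaround" warnings in the table below (for example `log10`, `log1p`, and `log2`) flag specialized functions that can be rewritten in terms of a more general one. A minimal sketch in plain Python of that kind of rewrite, assuming the target only exposes a natural logarithm (this is an illustration of the idea, not the SMA's generated code):

```python
import math

x = 8.0

# Each specialized log can be expressed with the natural log alone,
# which is why these functions warrant a workaround rather than
# being unsupported outright.
log2_x = math.log(x) / math.log(2.0)    # stands in for log2(x)
log10_x = math.log(x) / math.log(10.0)  # stands in for log10(x)
log1p_x = math.log(1.0 + x)             # stands in for log1p(x)

assert math.isclose(log2_x, math.log2(x))
assert math.isclose(log10_x, math.log10(x))
assert math.isclose(log1p_x, math.log1p(x))
```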

| Description | Category | Deprecated since |
| --- | --- | --- |
| Unsupported Spark version | Warning | - |
| File with parsing errors | Parsing error | - |
| Element is not supported | Conversion error | - |
| An error occurred when loading the symbol table | Conversion error | - |
| The symbol table could not be loaded | Parsing error | - |
| pyspark.conf.SparkConf is not required | Warning | - |
| pyspark.context.SparkContext is not required | Warning | - |
| pyspark.sql.context.SQLContext is not required | Warning | - |
| pyspark.sql.context.HiveContext is not required | Warning | - |
| pyspark.sql.dataframe.DataFrame.approxQuantile has a workaround | Warning | - |
| pyspark.sql.dataframe.DataFrame.checkpoint has a workaround | Warning | - |
| pyspark.sql.dataframe.DataFrameStatFunctions.approxQuantile has a workaround | Warning | - |
| pyspark.sql.dataframe.DataFrameStatFunctions.writeTo has a workaround | Warning | - |
| pyspark.sql.functions.acosh has a workaround | Warning | - |
| pyspark.sql.functions.asinh has a workaround | Warning | - |
| pyspark.sql.functions.atanh has a workaround | Warning | - |
| pyspark.sql.functions.collect_set has a workaround | Warning | - |
| pyspark.sql.functions.date_add has a workaround | Warning | - |
| pyspark.sql.functions.date_sub has a workaround | Warning | - |
| pyspark.sql.functions.datediff has a workaround | Warning | - |
| pyspark.sql.functions.instr has a workaround | Warning | - |
| pyspark.sql.functions.last has a workaround | Warning | - |
| pyspark.sql.functions.log10 has a workaround | Warning | - |
| pyspark.sql.functions.log1p has a workaround | Warning | - |
| pyspark.sql.functions.log2 has a workaround | Warning | - |
| pyspark.sql.functions.ntile has a workaround | Warning | - |
| pyspark.sql.readwriter.DataFrameReader.csv has a workaround | Warning | - |
| pyspark.sql.readwriter.DataFrameReader.json has a workaround | Warning | - |
| pyspark.sql.readwriter.DataFrameReader.orc has a workaround | Warning | - |
| pyspark.sql.readwriter.DataFrameReader.parquet has a workaround | Warning | - |
| pyspark.sql.session.SparkSession.Builder.appName has a workaround | Warning | - |
| pyspark.sql.column.Column.contains | Warning | - |
| Element is not defined | Conversion error | - |
| pyspark.sql.functions.asc has a workaround | Warning | - |
| pyspark.sql.functions.desc has a workaround | Warning | - |
| pyspark.sql.functions.reverse has a workaround | Warning | - |
| pyspark.sql.column.Column.getField has a workaround | Warning | - |
| pyspark.sql.functions.sort_array has a workaround | Warning | - |
| Element is not recognized | Conversion error | - |
| pyspark.sql.column.Column.getItem has a workaround | Warning | 2.3.0 |
| pyspark.sql.functions.explode has a workaround | Warning | - |
| pyspark.sql.functions.explode_outer has a workaround | Warning | - |
| pyspark.sql.functions.posexplode has a workaround | Warning | - |
| pyspark.sql.functions.posexplode_outer has a workaround | Warning | - |
| pyspark.sql.functions.split has a workaround | Warning | - |
| pyspark.sql.functions.map_values has a workaround | Warning | - |
| pyspark.sql.functions.monotonically_increasing_id has a workaround | Warning | - |
| pyspark.context.SparkContext.setLogLevel has a workaround | Warning | - |
| pyspark.sql.session.SparkSession.conf has a workaround | Warning | - |
| pyspark.sql.session.SparkSession.sparkContext | Warning | - |
| pyspark.conf.SparkConf.set has a workaround | Warning | - |
| pyspark.sql.session.SparkSession.Builder.master has a workaround | Warning | 2.4.0 |
| pyspark.sql.session.SparkSession.Builder.enableHiveSupport has a workaround | Warning | - |
| An error occurred when extracting the dbc files | Warning | - |
| Spark element with a given argument is not supported | Warning | - |
| Spark element with a given key-value argument is not supported | Warning | - |
| Option argument contains a value that is not a literal and therefore cannot be evaluated | Warning | - |
| to_pandas contains columns of type ArrayType, which is not supported but has a workaround | Warning | - |
| pyspark.rdd.RDD.getNumPartitions is not required | Warning | - |
| pyspark.storagelevel.StorageLevel is not required | Warning | - |
| pyspark.sql.functions.udf without parameters or a return type parameter is not supported | Warning | - |
| File has mixed indentation (spaces and tabs) | Parsing error | - |
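The last parsing error in the table, mixed indentation, can be reproduced with Python's own compiler: CPython rejects indentation that mixes tabs and spaces inconsistently by raising `TabError`, so a quick local compile check can catch such files before running the SMA. A sketch:

```python
# A function body indented first with a tab, then with spaces.
source = "def f():\n\tx = 1\n        return x\n"

try:
    # compile() runs the same tokenizer that rejects ambiguous indentation.
    compile(source, "<mixed_indent>", "exec")
    result = "compiled"
except TabError:
    result = "TabError"

print(result)  # TabError
```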
