#EWI: SPRKPY1026 => pyspark.sql.readwriter.DataFrameReader.csv has a workaround, see documentation for more info
stringmap = sparkSession.read.csv(path, schema="...", encoding="UTF-8", header=True, skipHeader=myVariable)
Schema:
The second parameter, "schema", is not supported by Snowpark as a parameter. Instead, specify the schema by calling the "schema" function on the reader before calling "csv", as follows:
Source:
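A minimal PySpark sketch of this scenario; the names sparkSession, path, and stringmap follow the call shown above, and the DDL schema string is illustrative:

```python
from pyspark.sql import SparkSession

sparkSession = SparkSession.builder.getOrCreate()
path = "data.csv"  # illustrative file path

# PySpark: the schema is passed directly as the "schema" parameter of csv().
stringmap = sparkSession.read.csv(path, schema="id INT, name STRING")
```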
Expected:
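A sketch of the equivalent Snowpark call under the same assumptions; Snowpark's "schema" function takes a StructType, and the stage path is illustrative:

```python
from snowflake.snowpark import Session
from snowflake.snowpark.types import IntegerType, StringType, StructField, StructType

sparkSession = Session.builder.getOrCreate()  # name kept from the sketch above
path = "@my_stage/data.csv"  # illustrative; Snowpark reads CSV files from a stage

# Snowpark: specify the schema with the schema() function,
# then call csv() with only the path.
my_schema = StructType([
    StructField("id", IntegerType()),
    StructField("name", StringType()),
])
stringmap = sparkSession.read.schema(my_schema).csv(path)
```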
Options:
The additional parameters are likewise not supported by Snowpark as parameters, but for many of them you can use the "option" function to specify those .csv parameters as options, as follows:
Source:
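A minimal PySpark sketch of this scenario, reusing sparkSession and path from the sketch above; encoding comes from the call at the top of this page, and sep is an illustrative extra parameter:

```python
# PySpark: extra CSV parameters are passed as keyword arguments to csv().
stringmap = sparkSession.read.csv(path, sep=",", encoding="UTF-8")
```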
Expected:
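A sketch of the Snowpark equivalent, chaining one "option" call per parameter before "csv". Passing the parameter names through unchanged is an assumption here; depending on the Snowpark version, the Snowflake file format option names (for example FIELD_DELIMITER rather than sep) may be required instead:

```python
# Snowpark: move each supported parameter into an option() call before csv().
stringmap = (
    sparkSession.read
    .option("sep", ",")
    .option("encoding", "UTF-8")
    .csv(path)
)
```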
The following options are not supported for Snowpark: quoteAll, inferSchema, enforceSchema, ignoreLeadingWhiteSpace, ignoreTrailingWhiteSpace, nullValue, nanValue, positiveInf, negativeInf, timestampNTZFormat, maxColumns, maxCharsPerColumn, mode, columnNameOfCorruptRecord, multiLine, samplingRatio, emptyValue, locale, unescapedQuoteHandling, header.
Recommendation
For more support, you can email us at [email protected]. If you have a support contract with Snowflake, reach out to your sales engineer, and they can direct your support needs.