Frequently Asked Questions (FAQ)

All of the information you could ever want to know about running your Spark Python code through the Snowpark Migration Accelerator.

Looking for additional information on using the Snowpark Migration Accelerator (SMA) for Python? You're not alone. Here's a collection of the most frequently asked questions about SMA for PySpark, along with their answers.

Using SMA with Jupyter Notebooks

Can I use Python notebooks (.ipynb files) in the tool?

Yes! The notebook files need to be in the source directory that you choose as input for the tool. They can be anywhere in that directory, and you do not need to limit your run of the tool to those notebooks. You can have any Python code with a .py extension and any notebook code with a .ipynb extension in your source code directory (or any subfolders in that directory).
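For example, a source directory passed to SMA could look like the following (the file and folder names here are only illustrative):

src/
  etl_job.py
  utils/
    helpers.py
  notebooks/
    exploration.ipynb
    reporting.ipynb

SMA will pick up both the .py files and the .ipynb files anywhere under the chosen root.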

Is there any reason to convert my notebook files to .py files?

That depends. If you want to continue to use the notebooks as notebooks, then there is no need to extract the Python code into .py files. If you are looking to move away from notebooks, there is a workaround to extract the Python code from a notebook. Recall, though, that the tool can process either format, so for SMA to run an analysis or conversion, the notebooks can stay as they are.

If you want to expose only the Python code in the notebook files, you can use the nbconvert utility. Follow these steps (a consolidated command sketch appears after the list):

  1. Install the nbconvert utility. You can install it with pip by running pip install nbconvert. NOTE: If you are using macOS, you will probably need to run pip3 install nbconvert or python3 -m pip install nbconvert instead.

  2. Create a copy of the directory that contains the notebook source code.

  3. Convert all of the notebooks by running this command from the command line: find /path/to/folder/with/notebooks -name '*.ipynb' | xargs python -m nbconvert --to script Note: If you are using macOS, you will probably need to run find /path/to/folder/with/notebooks -name '*.ipynb' | xargs python3 -m nbconvert --to script instead. This command will recursively convert the notebooks to Python scripts. The output will be placed in the given directory.

  4. Run SMA for Python on the output directory. This will process the *.py files found in the directory.
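Putting these steps together, a minimal end-to-end sketch could look like this (assuming a Unix-style shell; the directory paths below are placeholders, and macOS users should substitute pip3/python3 as noted above):

# Install the nbconvert utility
pip install nbconvert

# Work on a copy so the original notebooks stay untouched
cp -R /path/to/folder/with/notebooks /path/to/notebooks-copy

# Recursively convert every notebook in the copy to a .py script
find /path/to/notebooks-copy -name '*.ipynb' | xargs python -m nbconvert --to script

Once the conversion finishes, point SMA at the copied directory so that it picks up the generated .py files.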

Invalid Access Code error on VDI

I'm getting an access code error when using a VDI. What should I do?

If you activated an access code on a VDI and it reports an invalid access code error the next time you use it (usually after you shut down the VM or disconnect from the VDI), the problem may be with the virtualization software that is being used.

SnowConvert access codes are tied to the computer's serial number. That means an activated access code can only be used on the same computer: copying the activation files to another computer will not work unless the serial number is the same. Therefore, the virtualization software must keep the same UUID on the virtualized machine, or this error will appear.

To solve this error and activate your access code again, delete the current activation files, which are located in the following folders:

For Windows:

C:\Users\<Your User>\AppData\Roaming\Snowflake Inc

For macOS:

/Users/<Your User>/.config/Snowflake Inc
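
For example, assuming the default locations above, the activation files can be removed from a terminal as follows (double-check the path before deleting, since this removes the folder and its contents):

On Windows (Command Prompt):

rmdir /s /q "C:\Users\<Your User>\AppData\Roaming\Snowflake Inc"

On macOS (Terminal):

rm -rf "/Users/<Your User>/.config/Snowflake Inc"

You can also simply delete the folders manually in File Explorer or Finder.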

The virtualization software must be configured to keep the same UUID, or the error will happen again and you will need to reactivate the access code by following the steps above.
