Data Migration
To migrate the data of your tables, your S3 bucket must be in the same region as your Redshift cluster. Support for data migration with S3 buckets in a different region from the Redshift cluster will be added in the future.
SnowConvert migrates the data of your Redshift tables by unloading it to PARQUET files in an S3 bucket that you must provide. After the files are created, the application will copy the data directly from those files to the tables deployed in Snowflake.
Before executing the migration of your data, you need to meet the following prerequisites:
Have an S3 bucket in AWS in the same region as your Redshift cluster.
Create an IAM role associated with your Redshift cluster that can unload the data of your Redshift tables into your S3 bucket. The IAM role must have the following policy configuration (this configuration can be used by any database user; to restrict access to this role, you can read this guide):
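A minimal sketch of such a policy, with `<your-bucket>` as a placeholder for your bucket name; these are the actions Redshift UNLOAD typically needs to write files into a bucket, so adjust the resource ARNs to your environment:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowRedshiftUnloadToBucket",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetBucketLocation",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<your-bucket>",
        "arn:aws:s3:::<your-bucket>/*"
      ]
    }
  ]
}
```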
Have an IAM user that can read and delete objects in your S3 bucket; this is necessary to load the data from the files created in S3 into the Snowflake tables. Here is an example of an IAM policy that can be used to load data from the S3 files into the Snowflake target tables:
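One possible shape for this policy, with `<your-bucket>` as a placeholder for your bucket name; the first statement grants the read and delete actions on the data files (the delete actions let the tool clean up the PARQUET files after loading, as noted below), and the second allows listing the bucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadAndDeleteOfDataFiles",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion"
      ],
      "Resource": "arn:aws:s3:::<your-bucket>/*"
    },
    {
      "Sid": "AllowBucketListing",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::<your-bucket>"
    }
  ]
}
```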
If you don't grant the s3:DeleteObject and s3:DeleteObjectVersion permissions to your IAM user, the data migration process will not fail, but the data files created by the tool will not be deleted from the S3 bucket.
Be connected to the Redshift cluster and the Snowflake account where the DDL code was deployed.
Ensure that the S3 bucket path you enter does not contain any files; the process will fail if there are files in the given path.
Click on Set S3 Bucket Settings to add the following information:
S3 Bucket URL (ensure the URL you enter ends with a "/").
IAM Role ARN to unload data from the tables into PARQUET files in the S3 bucket URL you provided.
The access key of the IAM user that has permission to read and delete objects in the S3 bucket.
The secret access key of the IAM user that has permission to read and delete objects in the S3 bucket.
Select the tables whose data you want to migrate to Snowflake.
Click on Migrate Data to start the data migration process; the tool unloads the data into the S3 bucket and then copies it from those files to the target tables in Snowflake.
The data migration column will be updated to indicate whether each table's data was migrated successfully.
This page validates the number of rows moved from the source tables to the target tables.
Each row contains the following information about the migrated table: source schema and table name, target schema and table name, and the number of rows loaded.
If you want to execute another data migration process for more tables, click on Go Back to Data Migration.