
AWS DMS: Migrate data to Amazon S3 – How to do

Aug 2, 2021

Wondering how to migrate data to Amazon S3 using AWS DMS? We can help you!

Here at Bobcares, we often handle requests from our AWS customers to migrate data using AWS DMS as a part of our Server Management Services.

Today let’s see how our Support Engineers do this for our customers.

Using AWS DMS: Migrate data to Amazon S3

Here we will be migrating data in Apache Parquet (.parquet) format to Amazon Simple Storage Service (Amazon S3).

We can migrate data to an S3 bucket in Apache Parquet format if we use AWS DMS replication engine version 3.1.3 or later. The default Parquet version is Parquet 1.0.
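
To confirm that our replication instance meets this version requirement, we can check its engine version from the AWS CLI. A quick sketch; the query string just trims the output down to the instance identifier and engine version:

aws dms describe-replication-instances --query "ReplicationInstances[].[ReplicationInstanceIdentifier,EngineVersion]"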

Following are the steps that our Support Engineers use for the migration:

1. First, we have to create a target Amazon S3 endpoint from the AWS DMS Console.

2. Then add an extra connection attribute (ECA) using the following:

dataFormat=parquet;

We must also check the other extra connection attributes that we can use for storing Parquet objects in an S3 target; a couple of them appear in the sketch below.
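
For example, if the endpoint already exists, we can set several attributes at once with the modify-endpoint command. A sketch, assuming an existing S3 target endpoint; enableStatistics and encodingType are two of the other documented S3 target attributes, and the values here are only illustrative:

aws dms modify-endpoint --endpoint-arn <S3 target endpoint ARN> --extra-connection-attributes "dataFormat=parquet;enableStatistics=true;encodingType=plain-dictionary;"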

Or, create a target Amazon S3 endpoint using the following create-endpoint command in the AWS Command Line Interface (AWS CLI):

aws dms create-endpoint --endpoint-identifier s3-target-parquet --engine-name s3 --endpoint-type target --s3-settings '{"ServiceAccessRoleArn": "<IAM role ARN for S3 endpoint>", "BucketName": "<S3 bucket name to migrate to>", "DataFormat": "parquet"}'

3. After that, we can use the following extra connection attribute to specify the Parquet version of the output file:

parquetVersion=PARQUET_2_0;
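
The same setting is also exposed as the ParquetVersion parameter in the endpoint's S3 settings. A minimal sketch using modify-endpoint; as an assumption, other required S3 settings (such as the service access role and bucket name) may need to be passed along with it:

aws dms modify-endpoint --endpoint-arn <S3 target endpoint ARN> --s3-settings '{"ParquetVersion": "parquet-2-0"}'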

4. Then run the describe-endpoints command to verify that the S3 endpoint we created has the S3 setting DataFormat or the extra connection attribute dataFormat set to “parquet”.

To check the S3 setting DataFormat, we can use the following command:

aws dms describe-endpoints --filters Name=endpoint-arn,Values=<S3 target endpoint ARN> --query "Endpoints[].S3Settings.DataFormat"
[
    "parquet"
]
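
If the endpoint was configured with an extra connection attribute instead, we can query the endpoint's ExtraConnectionAttributes field in the same way:

aws dms describe-endpoints --filters Name=endpoint-arn,Values=<S3 target endpoint ARN> --query "Endpoints[].ExtraConnectionAttributes"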

5. If the value of the DataFormat parameter is “csv”, then we must recreate the endpoint with the data format set to “parquet”, as sketched below.
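
A minimal sketch of the recreation, assuming any tasks using the endpoint are stopped first: delete the endpoint, then re-run the create-endpoint command from step 2 with "DataFormat": "parquet":

aws dms delete-endpoint --endpoint-arn <S3 target endpoint ARN>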

6. After we get the output in Parquet format, we can parse the output file by installing the Apache Parquet command-line tool:

pip install parquet-cli --user
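
Note that a pip --user install places the parq binary in a per-user directory that may not be on the PATH; on most Linux systems, something like the following makes it available (the exact path is an assumption and varies by platform):

export PATH="$HOME/.local/bin:$PATH"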

7. Then, inspect the file format:

parq LOAD00000001.parquet 
 # Metadata 
 <pyarrow._parquet.FileMetaData object at 0x10e948aa0>
  created_by: AWS
  num_columns: 2
  num_rows: 2
  num_row_groups: 1
  format_version: 1.0
  serialized_size: 169
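
Here, format_version: 1.0 confirms that the file was written with the default Parquet 1.0. Had we set parquetVersion=PARQUET_2_0 on the endpoint, we would expect a 2.x format version instead.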

8. Finally, we can print the file content:

parq LOAD00000001.parquet --head
   i        c
0  1  insert1
1  2  insert2


Conclusion

To conclude, we saw the steps that our Support Engineers follow to migrate data to Amazon S3 in Apache Parquet format using AWS DMS.
