
How to migrate data from Amazon S3 to DigitalOcean Spaces with rclone?

Sep 15, 2020

Data migration is the primary task that concerns webmasters while shifting from one object storage service to another. As a part of our Server Management Services, we regularly help our customers migrate data from Amazon S3 to DigitalOcean Spaces and similar platforms.

Let us now discuss the steps to perform this task with rclone.
 

How to migrate data from Amazon S3 to DigitalOcean Spaces with rclone?

While we move from one object storage service to another, the primary task to perform is to migrate the data from the old space to the new one.

rclone is a command-line utility for syncing data to and from cloud storage services, and it can handle this task for us.

Let us now look at the steps to migrate the data from Amazon S3 to DigitalOcean Spaces in detail.

Creating API Keys and Finding Bucket Properties

To copy the objects to Spaces, we need some information about both the Amazon S3 and DigitalOcean Spaces accounts: a set of API keys for both services that the tool can use, and the region and location constraint values for our buckets.

Generating a DigitalOcean Spaces API Key and Finding the API Endpoint

To create a DigitalOcean Spaces API key, follow the steps given below:

  1. First, click on the API link in the main navigation of the Control Panel. The resulting page lists your DigitalOcean API tokens and Spaces access keys. Scroll down to the Spaces portion.
  2. If this is your first Space, you might not have any keys listed. Click the Generate New Key button. The New Spaces key dialog will pop up.
  3. Enter a name for the key. You can create as many keys as you like; keep in mind that the only way to revoke access for a key is to delete it.
  4. Click the Generate Key button to complete the process. You will be returned to the API screen listing all of your keys. Note that the new key has two long tokens displayed.
  5. The first token is the access key. This is not secret and will remain visible in the Control Panel. The second string is the secret key, which is displayed only once. Record it in a safe place; the next time you visit the API page, this value will be gone, and there is no way to retrieve it.
  6. Save the access key and the secret key so that you can configure rclone to access your account.
  7. Next, find the appropriate API endpoint. If you have already created a DigitalOcean Space you wish to transfer your objects to, you can view the Space’s endpoint within the DigitalOcean Control Panel by selecting the Space and viewing the Settings tab.

DigitalOcean Spaces endpoint

If we have not created a Space yet, rclone can create one for us automatically as part of the copying process. In that case, the endpoint is the Spaces region we wish to use followed by .digitaloceanspaces.com.
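For example, if we choose the nyc3 region (the same region used in the configuration example later in this guide), the endpoint would be:

nyc3.digitaloceanspaces.com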

 

Generating an Amazon S3 API Key

We need an Amazon API key with permission to read our S3 assets. If you do not already have one, generate it with the steps below:

  1. In your AWS Management Console, click on your account name and select My Security Credentials from the drop-down menu.
  2. Next, select Users in the left-hand menu and then click the Add user button:
  3. Type in a User name and select Programmatic access in the Access type section. Click the Next: Permissions button to continue.
  4. On the page that follows, select the Attach existing policies directly option at the top and then type s3read in the Policy type filter. Check the AmazonS3ReadOnlyAccess policy box and then click the Next: Review button to continue.
  5. Review the user details on the next page and then click the Create user button when ready.
  6. On the final page, you will see the credentials for your new user. Click the Show link under the Secret access key column to view the credentials.
  7. Copy the Access key ID and the Secret access key somewhere secure so that you can configure rclone to use those credentials. You can also click the Download .csv button to save the credentials to your computer.
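If you prefer the command line, the same user can be created with the AWS CLI. This is only a sketch: it assumes the AWS CLI is already configured with credentials that are allowed to manage IAM, and the user name rclone-migration is an arbitrary example:

$ # Create a user, attach the read-only S3 policy, and generate keys
$ aws iam create-user --user-name rclone-migration
$ aws iam attach-user-policy --user-name rclone-migration \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
$ aws iam create-access-key --user-name rclone-migration

The create-access-key output contains the AccessKeyId and SecretAccessKey fields; record them just as you would the console-generated credentials.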

Finding the Amazon S3 Bucket Region and Location Constraints

To find the region and location constraint values for our S3 bucket, click Services in the top menu and type S3 in the search bar that appears. Select the S3 service to go to the S3 management console.

We need to look for the region name of the bucket we wish to transfer. The region will be displayed next to the bucket name.

We need to find the region string and the matching location constraint associated with our bucket’s region. Look up your bucket’s region name in Amazon’s S3 region chart to find the appropriate region and location constraint strings. For example, if the region name is “US East (N. Virginia)”, the region string is us-east-1 and the location constraint is blank.
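If the AWS CLI is available, you can also query the location constraint directly instead of reading it off the console (my-source-bucket below is a placeholder for your bucket name):

$ aws s3api get-bucket-location --bucket my-source-bucket

A null LocationConstraint in the response corresponds to the us-east-1 region and a blank location constraint.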
 

Install rclone on the Local Computer

Now we are ready to install rclone on the local system. Precompiled binaries are packaged as zip archives on the official rclone site, which also provides detailed installation instructions for the various operating systems.
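On Linux or macOS, one common shortcut is the official install script; as with any piped installer, review the script before running it:

$ curl https://rclone.org/install.sh | sudo bash
$ rclone version

Running rclone version afterwards confirms the binary is on the PATH before we move on to configuration.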
 

Configure the S3 and Spaces Accounts

After rclone is installed, we can define our Amazon S3 and DigitalOcean Spaces configuration in its configuration file so that rclone can manage content between the two accounts.

To define the S3 account, paste the following section into the configuration file ~/.config/rclone/rclone.conf:

[s3]
type = s3
env_auth = false
access_key_id = aws_access_key
secret_access_key = aws_secret_key
region = aws_region
location_constraint = aws_location_constraint
acl = private

Here, we define a new rclone “remote” called s3. We set the type to s3 so that rclone knows the appropriate way to interact with and manage the remote storage resource. We will define the S3 credentials in the configuration file itself, so we set env_auth to false.

Next, we set the access_key_id and secret_access_key variables to our S3 access key and secret key, respectively. Be sure to change the values to the S3 credentials associated with your account.

We set the region and location constraint according to the properties of our S3 bucket that we found in the Amazon region chart. Finally, we set the access control policy to “private” so that assets are not public by default.
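As a concrete illustration, a finished s3 section for a bucket in the US East (N. Virginia) region discussed earlier might look like this (the two key values are placeholders, not real credentials):

[s3]
type = s3
env_auth = false
access_key_id = AKIAEXAMPLEKEYID
secret_access_key = exampleSecretAccessKey
region = us-east-1
location_constraint =
acl = private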

To define a similar section for our DigitalOcean Spaces configuration, paste the following section in the configuration file ~/.config/rclone/rclone.conf:

[spaces]
type = s3
env_auth = false
access_key_id = spaces_access_key
secret_access_key = spaces_secret_key
endpoint = nyc3.digitaloceanspaces.com
acl = private

In this section, we are defining a new remote called “spaces”. Again, we are setting type to s3 since Spaces offers an S3-compatible API. We turn off env_auth so that we can define the Spaces credentials within the configuration file.

Next, we set the access_key_id and secret_access_key variables to the values generated for our DigitalOcean account. We set the endpoint to the appropriate Spaces endpoint we determined earlier. Finally, we set the acl to private again to protect our assets until we want to share them.

Save and close the file when you are finished.

On macOS and Linux, be sure to lock down the permissions of the configuration file since our credentials are inside:

$ chmod 600 ~/.config/rclone/rclone.conf

On Windows, permissions are denied to non-administrative users unless explicitly granted, so we should not need to adjust access manually.
 

Copying Objects from S3 to Spaces

Before we begin to transfer files, we can verify that rclone recognizes both configured remotes:

$ rclone listremotes

It should display both of the sections we defined earlier.
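With the two sections defined above, that output looks like this:

Output
s3:
spaces: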

We can view the available S3 buckets by asking rclone to list the “directories” associated with the s3 remote:

$ rclone lsd s3:

Similarly, we can view the Spaces with the command below:

$ rclone lsd spaces:

To view the contents of an S3 bucket or DigitalOcean Space, use rclone’s tree subcommand. Pass in the remote name, followed by a colon and the name of the “directory” you wish to list (the bucket or Space name):

$ rclone tree s3:source-of-files

We can copy the files from the S3 bucket to a DigitalOcean Space with the sync subcommand:

$ rclone sync s3:source-of-files spaces:dest-of-files
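Note that sync makes the destination match the source, so it can delete files on the destination side that are not in the source. To preview the changes without transferring anything, or to watch a long transfer as it runs, rclone’s --dry-run and --progress flags can be added:

$ rclone sync --dry-run s3:source-of-files spaces:dest-of-files
$ rclone sync --progress s3:source-of-files spaces:dest-of-files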

When the transfer is complete, you can visually check that the objects have transferred by viewing them with the tree subcommand:

$ rclone tree spaces:dest-of-files

For a more detailed verification, use the check subcommand to compare the objects in both remotes:

$ rclone check s3:source-of-files spaces:dest-of-files

Output
2017/10/25 19:51:36 NOTICE: S3 bucket dest-of-files: 0 differences found
2017/10/25 19:51:36 NOTICE: S3 bucket dest-of-files: 2 hashes could not be checked

This will compare the hash values of each object in both remotes. You may receive a message indicating that some hashes could not be compared. In that case, you can rerun the command with the --size-only flag (which just compares based on file size) or the --download flag (which downloads each object from both remotes to compare locally) to verify the transfer integrity.
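For example:

$ rclone check --size-only s3:source-of-files spaces:dest-of-files
$ rclone check --download s3:source-of-files spaces:dest-of-files

The --size-only check is fast but only catches size mismatches, while --download is slower but compares the actual contents.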

[Need any further assistance to migrate data from Amazon S3 to DigitalOcean Spaces? We’re available 24/7.]

Conclusion

To sum up, data migration is the primary task that concerns webmasters while shifting from one object storage service to another. Today, we saw how our Support Engineers migrate data from Amazon S3 to DigitalOcean Spaces.

 

 

 
