In our last post, we talked about what the Oracle RAC SCAN Listener is; today, let's look at exporting and migrating data from AWS S3.
Recently, I have been working on deploying a Python system using several of the 12-month free-tier services offered by AWS, S3 among them. The S3 service is not very user-friendly: I wanted to export the static files from an existing S3 bucket, but there is no way to do so from the web console, and the built-in copy function is of little help, which makes the whole thing quite a hassle.
After a brief investigation, I found two methods:
Method 1. Export files from AWS S3 to AWS CloudShell
Execute the following command in AWS CloudShell:
aws s3 cp s3://bucket_name/ . --recursive --region region_name
Replace bucket_name with your bucket and region_name with the bucket's region (e.g., eu-central-1). This downloads every file from the bucket in the current AWS account to CloudShell, after which you can package them up and download the archive manually.
Once the files have been downloaded to AWS CloudShell, you can upload them to the corresponding target server.
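For reference, here is a minimal sketch of the whole sequence in CloudShell; the bucket name, region, and directory names are placeholders you would substitute with your own:
# Download the bucket contents into a working directory
mkdir -p bucket-export && cd bucket-export
aws s3 cp s3://bucket_name/ . --recursive --region eu-central-1
# Package everything into a single archive for easier transfer
cd .. && tar czf bucket-export.tar.gz bucket-export
You can then fetch the archive through CloudShell's Actions > Download file menu, or push it straight to the target server with scp if that server accepts SSH connections.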
Method 2. Directly migrate files from AWS S3 to the target bucket
Grant write access on the destination bucket. In the S3 console, open the destination bucket's Permissions tab and, under Bucket policy, configure the following:
{ "Version": "2023-12-27", "Statement": [ { "Effect": "Allow", "Principal": "*", "Action": "s3:PutObject", "Resource": "arn:aws:s3:::bucket_name/*" } ] }
After applying the bucket policy above, execute the following command in AWS CloudShell:
aws s3 cp s3://source_bucket_name/ s3://destination_bucket_name/ --recursive
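If the copy is interrupted or you need to re-run it, aws s3 sync is a handy alternative: it is recursive by default and only transfers objects that are missing or have changed on the destination:
aws s3 sync s3://source_bucket_name/ s3://destination_bucket_name/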
Note: After the file migration is complete, change the destination bucket's policy back to read-only:
{ "Version": "2023-12-27", "Statement": [ { "Effect": "Allow", "Principal": "*", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::bucket_name/*" } ] }