Replicate S3 buckets into another region
Setting up S3 bucket replication is a key step toward globally available buckets, keeping a copy of your data in more than one region.
When you replicate your S3 buckets, you continue to interact with the source bucket in its original region; S3 copies new objects to the destination bucket asynchronously.
Replication in Amazon S3 incurs additional costs for storage, requests, and inter-region data transfer. See the Amazon S3 pricing page for details.
What we'll be doing
By following this guide you will achieve multi-region replication for all S3 buckets in your stack, enhancing data availability, durability, and disaster recovery capabilities.
- Review the existing module
- Add a destination bucket to the module
- Configure IAM to allow replication
- Set up replication to the destination bucket
Review the existing module
Start by cloning the Nitric repository, then examine how the Terraform provider provisions an S3 bucket.
git clone https://github.com/nitrictech/nitric
cd nitric
The AWS S3 module in the default Terraform provider performs the following tasks (a simplified sketch follows the list):
- Creates a unique ID for each S3 bucket to ensure unique naming.
- Provisions each bucket with a unique name using the generated ID.
- Tags buckets for identification.
- Grants S3 permission to invoke specified Lambda functions.
- Configures S3 bucket notifications to trigger Lambda functions based on specified events using dynamic blocks.
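For orientation, the module's resources look roughly like the sketch below. The names aws_s3_bucket.bucket, random_id.bucket_id, and var.stack_id match identifiers used later in this guide; the other variable and resource names are illustrative assumptions, not the module's exact source.

# Simplified sketch of the existing bucket module (illustrative, not the exact source)

# Unique suffix so each bucket gets a globally unique name
resource "random_id" "bucket_id" {
  byte_length = 8
}

# The bucket itself, tagged so Nitric can identify it
resource "aws_s3_bucket" "bucket" {
  bucket = "${var.bucket_name}-${random_id.bucket_id.hex}" # var.bucket_name is an assumed name

  tags = {
    "x-nitric-${var.stack_id}-name" = var.bucket_name
    "x-nitric-${var.stack_id}-type" = "bucket"
  }
}

# Allow S3 to invoke each Lambda function subscribed to this bucket
resource "aws_lambda_permission" "allow_bucket" {
  for_each      = var.notification_targets # assumed variable name
  statement_id  = "AllowExecutionFromS3-${each.key}"
  action        = "lambda:InvokeFunction"
  function_name = each.value.function_arn
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.bucket.arn
}

# Bucket notifications wired to those functions using dynamic blocks
resource "aws_s3_bucket_notification" "notification" {
  bucket = aws_s3_bucket.bucket.id

  dynamic "lambda_function" {
    for_each = var.notification_targets
    content {
      lambda_function_arn = lambda_function.value.function_arn
      events              = lambda_function.value.events
    }
  }
}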
To begin the customization, we'll add new configuration to this module.
Add a destination bucket to the module
Introduce a new variable into aws/deploytf/.nitric/modules/bucket/variables.tf:
variable "replication_region" {description = "The AWS region for the replication bucket"type = stringdefault = "us-west-2"}
Now we can edit bucket/main.tf and introduce a provider for the replication region:
provider "aws" {alias = "replication"region = var.replication_regionendpoints {s3 = "https://s3.${var.replication_region}.amazonaws.com"}}
Then we can create our new destination bucket:
resource "random_id" "destination_bucket_id" {byte_length = 8}resource "aws_s3_bucket" "destination" {bucket = "tf-destination-bucket-${random_id.destination_bucket_id.hex}"tags = {"x-nitric-${var.stack_id}-name" = "tf-destination-bucket-${random_id.destination_bucket_id.hex}""x-nitric-${var.stack_id}-type" = "bucket"}provider = aws.replication}
S3 replication requires versioning, so enable it on both the source and destination buckets:
resource "aws_s3_bucket_versioning" "destination" {bucket = aws_s3_bucket.destination.idversioning_configuration {status = "Enabled"}provider = aws.replication}resource "aws_s3_bucket_versioning" "source" {bucket = aws_s3_bucket.bucket.idversioning_configuration {status = "Enabled"}}
Configure IAM to allow replication
First, we need an IAM role that S3 can assume, along with a policy granting the permissions needed for replication.
Full documentation can be found on the Terraform registry.
data "aws_iam_policy_document" "assume_role" {statement {effect = "Allow"principals {type = "Service"identifiers = ["s3.amazonaws.com"]}actions = ["sts:AssumeRole"]}}# Generate a random id for the IAM roleresource "random_id" "iam_role_id" {byte_length = 8}resource "aws_iam_role" "replication" {name = "tf-iam-role-replication-${random_id.iam_role_id.hex}"assume_role_policy = data.aws_iam_policy_document.assume_role.json}data "aws_iam_policy_document" "replication" {statement {effect = "Allow"actions = ["s3:GetReplicationConfiguration","s3:ListBucket",]resources = [aws_s3_bucket.bucket.arn]}statement {effect = "Allow"actions = ["s3:GetObjectVersionForReplication","s3:GetObjectVersionAcl","s3:GetObjectVersionTagging",]resources = ["${aws_s3_bucket.bucket.arn}/*"]}statement {effect = "Allow"actions = ["s3:ReplicateObject","s3:ReplicateDelete","s3:ReplicateTags",]resources = ["${aws_s3_bucket.destination.arn}/*"]}}resource "aws_iam_policy" "replication" {name = "tf-iam-role-policy-replication-${random_id.iam_role_id.hex}"policy = data.aws_iam_policy_document.replication.json}resource "aws_iam_role_policy_attachment" "replication" {role = aws_iam_role.replication.namepolicy_arn = aws_iam_policy.replication.arn}
Set up replication to the destination bucket
Next, we set up the replication configuration for the original bucket in the module.
As per the documentation, this config includes a filter so that only objects prefixed with "foo" will be replicated.
resource "aws_s3_bucket_replication_configuration" "replication" {depends_on = [aws_s3_bucket_versioning.destination,aws_s3_bucket_versioning.source]role = aws_iam_role.replication.arnbucket = aws_s3_bucket.bucket.idrule {id = "s3-replication-rule-${random_id.bucket_id.hex}"filter {prefix = "foo"}status = "Enabled"destination {bucket = aws_s3_bucket.destination.arnstorage_class = "STANDARD"}delete_marker_replication {status = "Enabled" # or "Disabled" based on your requirements}}}
Build and use your updated provider
The Nitric project includes a makefile that will build and install your provider as nitric/awstf@0.0.1 by default.
Navigate to nitric/cloud/aws and run make install to build and install the modified provider binary.
cd nitric/cloud/aws
make install
The provider can then be used directly in your project's stack file as follows.
# The nitric provider to use
provider: nitric/awstf@0.0.1

# The target aws region to deploy to
region: us-east-2
If you don't have a stack file, you can create one with the nitric stack new command:
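nitric stack new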
Because the Terraform providers are in preview, you'll also need to enable beta-providers in your Nitric project by adding the following to your project's nitric.yaml file:
preview:
  - beta-providers
You can generate the Terraform project as usual by running the nitric up command:
nitric up
To deploy the application using Terraform, you can navigate into your Terraform stack directory and use the standard Terraform commands:
terraform init
terraform plan
terraform apply
Finally, log into the AWS console to verify the replication configuration was applied.
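You can also verify from the command line with the AWS CLI. In the sketch below the bucket names are placeholders; substitute the generated names of your source and destination buckets. Replication is asynchronous, so a test object may take a few minutes to appear in the destination.

# Confirm the replication configuration was applied to the source bucket
aws s3api get-bucket-replication --bucket <your-source-bucket-name>

# Upload an object under the replicated "foo" prefix...
aws s3 cp ./example.txt s3://<your-source-bucket-name>/foo-example.txt

# ...then check that it shows up in the destination bucket
aws s3 ls s3://<your-destination-bucket-name>/ --region us-west-2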