Dynamic AWS resource discovery for one-click region spin-ups

Disclaimer: At the time of this article’s writing, I work at Amazon, but not in AWS. This article is based on my own research and ideas and is not the official position of Amazon. This article is not intended as marketing material for AWS, only as some architectural patterns for you to use if you do leverage AWS.

AWS provides a number of different resources that you can use to build services, including S3 buckets, SQS queues, etc. When you create a new instance of a resource, you must pick a name, which usually must be unique within a given namespace. Depending on your naming scheme, you may also end up embedding resource names in code or configuration files. This makes spinning up new regions difficult, because you now have to update configuration with names for every stage/region that you might use. This may not seem like that big of a deal, but consider that you may have tens of different SQS queues, S3 buckets, etc. for each region/stage. This can begin to combinatorially explode, as you now have # regions * # stages * # resources of different configuration definitions. This results in a lot of boilerplate.
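To make that explosion concrete, here is what a hypothetical config file might look like with just two regions, two stages, and two resources (all names made up). That is already eight hand-maintained entries, and every new region or stage multiplies the count:

```
s3BucketName.us-east-1.prod=foo-service-us-east-1-prod
s3BucketName.us-east-1.beta=foo-service-us-east-1-beta
s3BucketName.eu-west-1.prod=foo-service-eu-west-1-prod
s3BucketName.eu-west-1.beta=foo-service-eu-west-1-beta
sqsQueueName.us-east-1.prod=foo-work-queue-us-east-1-prod
sqsQueueName.us-east-1.beta=foo-work-queue-us-east-1-beta
sqsQueueName.eu-west-1.prod=foo-work-queue-eu-west-1-prod
sqsQueueName.eu-west-1.beta=foo-work-queue-eu-west-1-beta
```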

But what if there was a better way?

Here I propose a slightly different method of naming and interacting with resources such that you don’t need to manually pick names for S3 buckets and hope nobody else is using them.

For example, an S3 bucket name must be globally unique across all other AWS customers. This is because each S3 bucket exists in DNS as https://{bucket_name}.s3.amazonaws.com.

Say I have a service ‘Foo Service’ and I currently operate in one region and am not planning on expanding to other regions. I might create the S3 bucket foo-service, then I add some config to use my bucket:

s3BucketName=foo-service
This works fine until I need to scale up to another region and create a new bucket foo-service-eu-west-1. Now my config will continue to grow for every region:

s3BucketName.us-east-1=foo-service
s3BucketName.eu-west-1=foo-service-eu-west-1

While it does not look like much, it still multiplies as you add more resources and regions, and it increases the risk of a mistake during the critical time of spinning up a new region. What other options are there?

Say I know I’ll need an S3 bucket in all regions that I operate in: us-east-1, us-west-2, and eu-west-1. I might configure my service to use the bucket named foo-service-{region} (automatically replacing {region} with whatever region the service is running in). I then create three buckets: foo-service-us-east-1, foo-service-us-west-2, and foo-service-eu-west-1. This works great until somebody else discovers the pattern and creates a bucket named foo-service-ap-northeast-1. Now I’m stuck and have to rework the bucket logic to support other naming patterns to get around this.
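The substitution itself is trivial, which is part of why the pattern is tempting. A minimal sketch (class, method, and pattern names are hypothetical; in practice the region would come from instance metadata or an environment variable rather than a hard-coded string):

```java
public class BucketNamePattern {
  // Expand a naming pattern like "foo-service-{region}" for the current region.
  public static String expand(String pattern, String region) {
    return pattern.replace("{region}", region);
  }

  public static void main(String[] args) {
    // Region hard-coded here for illustration only.
    System.out.println(expand("foo-service-{region}", "us-west-2"));
  }
}
```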

Another risk with the patterned approach is that I have hard-coded the fact that the only parameter in the resource name is {region}. I may later want to add a testing environment and a stage parameter, giving foo-service-{region}-{stage}. This is difficult to change once you’re in production and have critical data stored in buckets that you can’t remove. Another strategy for ensuring high availability is to shard across many different AWS accounts and stacks to reduce the blast radius of something going wrong. If you don’t anticipate this requirement during design, it can come back later when it is much harder to fix.

Instead, I should configure my service to connect to the AWS account and automatically detect the correct bucket. If I were using CloudFormation to configure my resources (why it’s powerful is a separate article), I could delegate naming of the bucket to CloudFormation, and it would automatically generate a unique bucket name for me.

CloudFormation API Documentation

  WidgetBucket:
    Type: AWS::S3::Bucket
    Properties:
      Tags:
        - Key: bucket-purpose
          Value: widget-storage

This CloudFormation snippet will automatically generate a bucket with a randomized name like fooservice-widgetbucket-abc123 and a tag bucket-purpose=widget-storage. During application start-up, I know I need the ‘widget-storage’ bucket, so I just need to find it.

Sample code:

public String findBucketName(AmazonS3 s3Client, String bucketUseTag) {
  List<Bucket> buckets = s3Client.listBuckets();
  for (Bucket bucket : buckets) {
    // Buckets without any tags return a null tagging configuration.
    BucketTaggingConfiguration tags =
        s3Client.getBucketTaggingConfiguration(bucket.getName());
    if (tags == null) {
      continue;
    }
    String purpose = tags.getTagSet().getTag("bucket-purpose");
    if (bucketUseTag.equals(purpose)) {
      return bucket.getName();
    }
  }
  return null;
}
And now you can spin up as many regions, stages, and AWS accounts as you want using CloudFormation, and your service can automatically discover the correct bucket, assuming it has provisioned AWS credentials. This works even better when you use IAM instance roles to automatically distribute credentials to hosts.

What about SQS?

The same problem can also affect other AWS resource types, such as SQS queues. While an SQS queue name doesn’t have to be globally unique, it can still result in boilerplate configuration if you have to hard-code every permutation of a given queue name in your config. An SQS queue name only needs to be unique within a given AWS account + region, since its fully-qualified URL is https://sqs.{region}.amazonaws.com/{account-id}/{queue-name}.
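To illustrate that uniqueness scope, here is a small sketch that composes the fully-qualified queue URL from its three components (the account ID and queue name are made up); in practice you would fetch this URL from the GetQueueUrl API rather than construct it yourself:

```java
public class SqsQueueUrl {
  // Compose the fully-qualified SQS queue URL. The queue name only needs to
  // be unique within one account + region pair.
  public static String of(String region, String accountId, String queueName) {
    return "https://sqs." + region + ".amazonaws.com/" + accountId + "/" + queueName;
  }

  public static void main(String[] args) {
    // Example values; a real AWS account ID is a 12-digit number.
    System.out.println(of("us-east-1", "123456789012", "WorkQueue"));
  }
}
```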

One solution for SQS queues is to use static queue names across all regions and stages (i.e. not WorkQueue-us-west-2 or WorkQueue-Beta, just WorkQueue). Then, on application start-up, use the GetQueueUrl API call to fetch the queue URL and use that.

The SQS queue example might seem obvious, but I’ve seen a number of services using the configuration approach when they don’t need to.


