Ruby on Rails recently launched support for compiling static assets, such as JavaScript, using Webpack. Among other things, Webpack is more powerful at JS compilation than the previous Rails default, Sprockets. Integration with Rails is provided by the Webpacker gem. Two features I was particularly interested in leveraging were tree shaking and support for the NPM package registry. With Sprockets, common JS libraries such as ReactJS had to be imported using gems such as react-rails or classnames-rails, which added friction both to adding new dependencies and to upgrading existing ones.
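To make that concrete, here is a rough sketch of what pulling a library straight from NPM looks like under Webpacker; the package, file path, and usage are just illustrative, not taken from a specific project:

```js
// After running `yarn add classnames`, the package can be imported directly
// in a Webpacker pack, e.g. app/javascript/packs/application.js,
// with no wrapper gem like classnames-rails in the Gemfile.
import classNames from 'classnames';

// Build a CSS class string conditionally.
const buttonClass = classNames('btn', { 'btn-primary': true, disabled: false });
console.log(buttonClass); // => "btn btn-primary"
```

Upgrading the library is then a matter of bumping the version in package.json rather than waiting for a new release of a wrapper gem.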
Introduction
Pretty much every programming language out there has tools that statically analyze your source code and detect different problems. These checks range from simple things, like ensuring consistent casing for variable names in Java, to ruthlessly enforcing method length limits in Ruby. If you’ve ever used one of these tools, they may seem overbearing and not worth the hassle, but they quickly prove their value once your application grows larger, gains multiple developers, or becomes business critical and can’t afford outages caused by trivial mistakes. Static analysis tools are a very low-cost way to improve the quality of a codebase.
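As one example, here is a minimal ESLint configuration for a JavaScript project; the specific rules are purely illustrative, and similar checks exist in tools like RuboCop for Ruby or Checkstyle for Java:

```js
// .eslintrc.js - a small, illustrative ESLint configuration
module.exports = {
  parserOptions: { ecmaVersion: 2020, sourceType: 'module' },
  rules: {
    // Flag variables that are declared but never used (a classic trivial mistake).
    'no-unused-vars': 'error',
    // Enforce a consistent naming convention, similar to casing checks in other languages.
    camelcase: 'error',
    // Keep functions from growing without bound, akin to method length limits in Ruby linters.
    'max-lines-per-function': ['warn', 50],
  },
};
```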
Note: I’m going to use AWS services for most of my examples in this post, but that’s only because I’m most familiar with them. The patterns below are not limited to AWS and can be applied to any cloud provider, or to self-hosted environments where similar building blocks exist.
Introduction
Every service requires some amount of supporting infrastructure. This includes virtual servers (EC2 or otherwise), storage (e.g., S3, DynamoDB), load balancing, and so on: essentially, any resource your service uses that is not your direct business logic can be considered infrastructure. If you use continuous integration and change control for your business logic, why wouldn’t you apply the same rules to your infrastructure?
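One way to bring infrastructure under the same change control is to declare it as code. Below is a minimal sketch using the AWS CDK in JavaScript; the stack and resource names are made up, and the CDK is just one option alongside CloudFormation or Terraform:

```js
// A hypothetical stack that declares a service's supporting infrastructure,
// so changes flow through the same code review and CI pipeline as business logic.
const cdk = require('aws-cdk-lib');
const s3 = require('aws-cdk-lib/aws-s3');
const sqs = require('aws-cdk-lib/aws-sqs');

class ServiceInfraStack extends cdk.Stack {
  constructor(scope, id, props) {
    super(scope, id, props);

    // Storage for uploaded objects.
    new s3.Bucket(this, 'UploadsBucket', { versioned: true });

    // Queue for background work.
    new sqs.Queue(this, 'WorkQueue', {
      visibilityTimeout: cdk.Duration.minutes(5),
    });
  }
}

const app = new cdk.App();
new ServiceInfraStack(app, 'ServiceInfraStack');
app.synth();
```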
Docker containers are the latest craze taking the world by storm. They give software vendors more control over how their software is executed, reducing the amount of work that the people hosting it need to be responsible for. By shifting the burden of figuring out environment requirements onto the software vendor, critical decisions that help improve security can be made once, and only once, and then distributed to end users. This lowers the cost barrier to having more stable and secure software, since users no longer have to think about the intricacies of security and management, details they rarely take the time to invest in anyway.
Disclaimer: At the time of this article’s writing, I work at Amazon, but not in AWS. This article is based on my own research and ideas and is not the official position of Amazon. This article is not intended as marketing material for AWS, only as some architectural patterns for you to use if you do leverage AWS.
AWS provides a number of different resources that you can use to build services, including S3 buckets, SQS queues, and so on. When you create a new instance of one of these resources, you must pick a name, and that name usually has to be unique within a given namespace. Depending on your naming scheme, you may also end up embedding resource names in code or configuration files. This makes spinning up new regions difficult, because you now have to update configuration with names for every stage and region you might use. That may not seem like a big deal, but consider that you may have tens of different SQS queues, S3 buckets, and so on for each region and stage. The configuration begins to explode combinatorially, since you now have (# regions × # stages × # resources) different configuration definitions. This results in a lot of boilerplate.
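To illustrate the kind of boilerplate this produces, here is what a hand-maintained mapping of resource names can end up looking like; the service name, stages, and regions are made up for the example:

```js
// config/resources.js - every stage/region combination needs its own entries,
// and every new resource adds a line to each block.
module.exports = {
  'prod-us-east-1': {
    uploadBucket: 'myservice-prod-us-east-1-uploads',
    workQueue: 'myservice-prod-us-east-1-work-queue',
  },
  'prod-eu-west-1': {
    uploadBucket: 'myservice-prod-eu-west-1-uploads',
    workQueue: 'myservice-prod-eu-west-1-work-queue',
  },
  'beta-us-east-1': {
    uploadBucket: 'myservice-beta-us-east-1-uploads',
    workQueue: 'myservice-beta-us-east-1-work-queue',
  },
  // ...and so on for every additional stage, region, and resource.
};
```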