There and Back Again… with AWS Auto Scaling
Auto scaling has been one of the most talked-about AWS services, and for good reason. Briefly, auto scaling provides the instrumentation and automation typically employed to address availability and cost optimization. It does this by enabling and managing horizontal scaling of grouped EC2 instances: members of the group can be dynamically added and removed based on defined and monitored criteria. Additional details can be found in the AWS Auto Scaling documentation.
Hey, Over Here…
While being able to react at massive scale in an automated fashion (the ‘there’) is indeed one of the sexier features of auto scaling, its practical implications (such as the ‘back again’) are less touted and often overlooked.
While I may not immediately need to dynamically scale out my application, I also don’t need to run it with resources allocated in the traditional static manner (whether virtual or physical). As the comparison below illustrates, with AWS auto scaling I can run my application with significantly fewer resources (using a smaller instance type, for example) and scale out as needed, avoiding the waste that often occurs in traditional environments.
In addition, and quite significantly, when resources are no longer needed, usage will automatically scale back in, and I can reap the cost benefits of the utility computing model.
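The scale-out-then-back-in behaviour can be sketched with the rough proportional logic behind a target-tracking policy: resize the group so the per-instance metric moves back toward a target. This is a simplified illustration, not AWS’s exact algorithm, and all names and numbers here are hypothetical.

```python
import math

def desired_capacity(current_capacity: int, metric_value: float,
                     target_value: float, min_size: int, max_size: int) -> int:
    """Proportionally resize the group so the per-instance metric
    (e.g. average CPU %) moves back toward the target value."""
    proposed = math.ceil(current_capacity * metric_value / target_value)
    # Clamp to the group's configured bounds.
    return max(min_size, min(max_size, proposed))

# A small fleet running hot scales out...
print(desired_capacity(2, 90.0, 50.0, min_size=1, max_size=6))  # 4
# ...and scales back in when load subsides, cutting cost.
print(desired_capacity(4, 10.0, 50.0, min_size=1, max_size=6))  # 1
```

The clamp matters: no matter how extreme the metric reading, the group never drops below its minimum or grows past its maximum.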
Bring Your Umbrella
Auto scaling can also be used to add basic protection to a single EC2 instance, and depending on the application this may be entirely sufficient (especially compared to manual recovery alternatives). While there will be a momentary service outage in this case, an auto scaling group of one does provide a measure of automated protection: if the instance becomes unhealthy, it will be terminated and another instance will be provisioned to take its place.
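A group of one comes down to pinning the group’s size bounds at 1. The sketch below builds such a request as a plain dict, using parameter names from boto3’s `create_auto_scaling_group` call; the group, launch configuration, and zone names are purely illustrative.

```python
def self_healing_group_params(name: str, launch_config: str, zones: list) -> dict:
    """Parameters for an auto scaling group of one: if the single
    instance fails its health check, it is terminated and replaced."""
    return {
        "AutoScalingGroupName": name,
        "LaunchConfigurationName": launch_config,
        "AvailabilityZones": zones,
        "MinSize": 1,       # always keep one instance running...
        "MaxSize": 1,       # ...and never more than one
        "DesiredCapacity": 1,
        "HealthCheckType": "EC2",
        "HealthCheckGracePeriod": 300,  # seconds to let the instance boot
    }

# Hypothetical names; in practice these would be passed to
# boto3.client("autoscaling").create_auto_scaling_group(**params).
params = self_healing_group_params("web-asg", "web-lc-v1", ["us-east-1a"])
```

With `MinSize` and `MaxSize` both at 1, the group never scales, it only heals.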
Afore ye go
As with any successful journey, experience and planning are crucial. Auto scaling is not a ‘silver bullet’ for any of the considerations mentioned above, and without proper configuration and application it can have quite the opposite result. Without a managed, ‘standardized’ Amazon Machine Image (AMI) and a Launch Configuration that uses it, I would not achieve the desired outcome.
However, when appropriately configured, I gain greater flexibility. For example, with an Elastic Load Balancer (ELB) in front of my EC2 instance, I can introduce and redirect traffic to an additional EC2 instance that is more up to date and patched, or that serves a new version of my application, with little to no service interruption. If I have to back out of a change, I can do so very rapidly and with minimal consequence. And while the preceding description is of course an oversimplification, the capability is a reality.
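The cut-over (and its rollback) can be pictured with a toy stand-in for the load balancer — this is a simulation of the traffic-shifting idea, not the ELB API, and the instance names are invented:

```python
class LoadBalancer:
    """Toy stand-in for an ELB: traffic goes to registered instances."""
    def __init__(self):
        self.instances = []

    def register(self, instance: str) -> None:
        self.instances.append(instance)

    def deregister(self, instance: str) -> None:
        self.instances.remove(instance)

def swap(elb: LoadBalancer, old: str, new: str) -> None:
    """Bring the new instance in before draining the old one,
    so at least one instance is serving at all times."""
    elb.register(new)
    elb.deregister(old)

elb = LoadBalancer()
elb.register("app-v1")
swap(elb, "app-v1", "app-v2")  # cut over to the patched instance
swap(elb, "app-v2", "app-v1")  # backing out is the same move in reverse
```

The key design point is the ordering inside `swap`: register before deregister, so the pool is never empty mid-change.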
While, generally speaking, scaling out quickly and scaling in slowly provides a favourable balance of application performance, availability, and cost, each application also has its own unique requirements. Deep application knowledge and objective metrics can then further inform refinements to the auto scaling configuration.
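The ‘out quickly, in slowly’ asymmetry can be sketched as an evaluation rule: act on a single breached period when adding capacity, but demand a sustained lull before removing it. The thresholds below are illustrative, not AWS defaults.

```python
def scaling_action(high_periods: int, low_periods: int) -> str:
    """Scale out after a single breached period, but require a
    sustained lull before scaling in (out fast, in slow)."""
    SCALE_OUT_AFTER = 1  # periods above the high-water mark
    SCALE_IN_AFTER = 5   # consecutive periods below the low-water mark
    if high_periods >= SCALE_OUT_AFTER:
        return "scale-out"
    if low_periods >= SCALE_IN_AFTER:
        return "scale-in"
    return "no-op"

print(scaling_action(high_periods=1, low_periods=0))  # scale-out
print(scaling_action(high_periods=0, low_periods=2))  # no-op: lull not sustained
```

Application knowledge informs the two thresholds: a bursty workload might tolerate an even longer scale-in window, while a steady one can shorten it.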
And this is just the tip of a constantly developing iceberg: there are several additional AWS platform services that can further augment my applications and services.
More on these to come…