Adrian’s top AWS updates — Nov 27th, 2019
So, buckle up, and let me give you a curated list of the things I find most important: matters related to architecture, scalability, reliability, performance, resiliency, DevOps, or anything else that catches my eye. No discrimination; I can’t learn or like everything :)
You can now convert an existing DynamoDB table to a global table without any downtime. Previously, you could only create a global table from an empty table.
When you add an AWS Region to your table, DynamoDB begins populating a new replica by using a snapshot of your existing table. You can continue writing to the originating region while DynamoDB builds the new replica, and DynamoDB replicates all in-flight updates automatically to the new replica. Learn more about global tables.
>> Yes, you read correctly! Long-awaited feature.
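Adding a Region to an existing table is a single UpdateTable call. A hedged CLI sketch (table name and Regions are placeholders; requires the 2019.11.21 version of global tables):

```shell
# Add a eu-west-1 replica to an existing table in us-east-1.
aws dynamodb update-table \
    --table-name my-table \
    --replica-updates '[{"Create": {"RegionName": "eu-west-1"}}]' \
    --region us-east-1
```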
You can now recycle instances in an Auto Scaling group (ASG) at a regular cadence. The Maximum Instance Lifetime (MIL) parameter helps you ensure that EC2 instances are recycled before reaching a specified lifetime, giving you an automated way to adhere to your security, compliance, and performance requirements.
You can either create a new ASG or update an existing one to include the Maximum Instance Lifetime value (between 7 and 365 days). Learn more about MIL in EC2 Auto Scaling.
>> Max uptime has to die! So, I love that; it is a sort of automated Chaos Monkey ;)
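Setting it on an existing group is one API call; note that the API takes the value in seconds, even though the allowed range is expressed in days. A sketch (the group name is a placeholder):

```shell
# Recycle instances after at most 7 days (604800 seconds).
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name my-asg \
    --max-instance-lifetime 604800
```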
ALB now supports Weighted Target Groups routing. You can weight the traffic forwarded by a rule across multiple target groups. Weights can be set to any integer from 0 to 999 and changed up or down as often as desired. All target group types (instance, IP, and Lambda) are supported. To learn more, check the demo page or read the blog post.
>> HUGE! As you can see from my tweet — I think this is probably the most crucial launch here.
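To get a feel for what those weights mean, here is a minimal sketch (illustration only, not AWS code) of weighted selection between two target groups, e.g. for a 90/10 canary split:

```python
import random

def pick_target_group(weighted_groups, rng=random):
    """Pick a target group name with probability proportional to its weight.

    weighted_groups: list of (name, weight) pairs; weights 0-999 as on the ALB.
    """
    total = sum(w for _, w in weighted_groups)
    roll = rng.uniform(0, total)
    for name, weight in weighted_groups:
        roll -= weight
        if roll <= 0:
            return name
    return weighted_groups[-1][0]  # guard against float rounding

# Example: roughly 90% of requests to "stable", 10% to "canary".
groups = [("stable", 900), ("canary", 100)]
counts = {"stable": 0, "canary": 0}
for _ in range(10_000):
    counts[pick_target_group(groups)] += 1
```

Changing the split is then just a weight update on the rule, with no DNS changes and no new load balancer.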
Application Load Balancer now supports Least Outstanding Requests algorithm for load balancing requests
The least outstanding requests (LOR) algorithm is now available for the Application Load Balancer. You can opt in to the LOR algorithm to route requests within a target group. With LOR, as a new request comes in, the load balancer sends it to the target with the fewest outstanding requests. Targets processing long-running requests, or with lower processing capacity, are not burdened with more work, and the load is spread evenly across targets. This also helps newly added targets take pressure off overloaded ones. Learn more about LOR.
>> Don’t send requests to servers that are already busy. That is good!
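The routing decision is easy to picture. A toy sketch (not the ALB implementation): track in-flight requests per target and always pick the minimum.

```python
class LeastOutstandingRouter:
    """Toy least-outstanding-requests (LOR) router, for illustration only."""

    def __init__(self, targets):
        # Count of in-flight requests per target.
        self.outstanding = {t: 0 for t in targets}

    def route(self):
        # Pick the target with the fewest in-flight requests.
        target = min(self.outstanding, key=self.outstanding.get)
        self.outstanding[target] += 1
        return target

    def complete(self, target):
        self.outstanding[target] -= 1

router = LeastOutstandingRouter(["t1", "t2"])
first = router.route()   # both idle, picks t1
second = router.route()  # t2 is now the least loaded
```

Contrast with round robin, which keeps sending requests to a target even while it is stuck on a slow one.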
Amazon EC2 Auto Scaling now lets you include instance weights in Auto Scaling groups (ASGs) that are configured to provision and scale across multiple instance types. Instance weights define the capacity units that each instance type contributes to your application’s performance, giving you greater flexibility in the instance types you include in your ASG. Learn more about instance weighting.
>> I can see some exciting consequences for composite workloads and optimizations — significant for scaling.
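A hedged sketch of the arithmetic: desired capacity is expressed in units, each instance type carries a weight, and the group fills capacity by summed weights (the weights and fleet below are invented for illustration):

```python
def capacity_units(fleet, weights):
    """Sum the capacity units a mixed fleet contributes.

    fleet:   dict of instance type -> running instance count
    weights: dict of instance type -> units per instance (as set on the ASG)
    """
    return sum(count * weights[itype] for itype, count in fleet.items())

# Example: treat one c5.4xlarge as worth 4 units of a c5.xlarge baseline.
weights = {"c5.xlarge": 1, "c5.4xlarge": 4}
fleet = {"c5.xlarge": 3, "c5.4xlarge": 2}
units = capacity_units(fleet, weights)  # 3*1 + 2*4 = 11 units
```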
Amazon DynamoDB adaptive capacity now handles imbalanced workloads better by isolating frequently accessed items automatically
Amazon DynamoDB adaptive capacity handles imbalanced workloads better by isolating frequently accessed items automatically. If your application drives disproportionately high traffic to one or more items, DynamoDB will re-balance your partitions such that frequently accessed items do not reside on the same partition. Adaptive capacity is on by default, and it is provided to you at no additional cost for all DynamoDB tables and global secondary indexes. Learn more about Isolate Frequently Accessed Items.
>> Under the hood! I love that :)
You can now attach a dead-letter queue (DLQ) to an Amazon Simple Notification Service (SNS) subscription to capture undelivered messages. Amazon SNS DLQs make your application more resilient and durable by storing messages in case your subscription endpoint becomes unreachable. Learn more by reading the blog post.
>> Handling failure is a good thing ;)
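The DLQ is wired up through the subscription’s RedrivePolicy attribute, pointing at an SQS queue. A sketch (all ARNs are placeholders):

```shell
aws sns set-subscription-attributes \
    --subscription-arn arn:aws:sns:us-east-1:123456789012:my-topic:SUBSCRIPTION_ID \
    --attribute-name RedrivePolicy \
    --attribute-value '{"deadLetterTargetArn":"arn:aws:sqs:us-east-1:123456789012:my-dlq"}'
```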
Amazon CloudWatch Synthetics lets you monitor application endpoints more easily by running canaries. Canaries continually verify your customer experience even when your applications don’t get any traffic, enabling you to discover issues before your customers do. CloudWatch Synthetics runs tests on your endpoints every minute, 24x7, and alerts you when your application endpoints don’t behave as expected.
>> Monitoring is the key to success. And, it is always better to learn about failure from your monitoring system rather than from Twitter.
CloudWatch Contributor Insights for Amazon DynamoDB (Preview) helps you identify frequently accessed keys and database traffic trends
Amazon CloudWatch Contributor Insights for Amazon DynamoDB is a new diagnostic tool (in preview) that provides a view of the traffic trends of your DynamoDB table and helps you identify the most frequently accessed keys. You can use this information to understand the application’s traffic patterns better and act accordingly.
>> This is really interesting. Understanding access patterns is fundamental to building quality applications.
AWS Lambda now supports four failure-handling features for processing Kinesis and DynamoDB streams: Bisect on Function Error, Maximum Record Age, Maximum Retry Attempts, and Destination on Failure.
These new features allow you to customize responses to data processing failures and build more resilient stream processing applications. Learn more about failure handling.
>> Handling failure is a good thing, again ;)
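Bisect on Function Error is worth picturing: when a batch fails, Lambda splits it in two and retries each half, narrowing in on the poison record instead of blocking the whole shard. A toy sketch of the idea (not Lambda’s code):

```python
def process_with_bisect(batch, handler):
    """Recursively split a failing batch to isolate bad records.

    handler raises on any batch containing a bad record (all-or-nothing),
    mimicking a Lambda function that fails its whole batch.
    Returns (processed, dead_lettered) record lists.
    """
    try:
        handler(batch)
        return list(batch), []
    except Exception:
        if len(batch) == 1:
            return [], list(batch)  # isolated the poison record
        mid = len(batch) // 2
        ok_left, bad_left = process_with_bisect(batch[:mid], handler)
        ok_right, bad_right = process_with_bisect(batch[mid:], handler)
        return ok_left + ok_right, bad_left + bad_right

def handler(batch):
    # Stand-in for a Lambda function that fails on a bad record.
    if "poison" in batch:
        raise ValueError("bad record in batch")

ok, dead = process_with_bisect(["a", "b", "poison", "c"], handler)
```

Combined with Maximum Retry Attempts, Maximum Record Age, and an on-failure destination, the bad record ends up in your failure destination while the good records still get processed.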
Trace maps let you visually map the end-to-end path of a single request and understand which service is causing disruption. Additionally, you can visually identify where the error originated and how it affected other services in the request path. Learn more about trace maps.
>> Did I mention monitoring is the key to success? Observability as well!
Amazon S3 Replication Time Control (S3 RTC) is a new feature of S3 Replication that provides a predictable replication time backed by a Service Level Agreement (SLA). S3 RTC helps customers meet compliance or business requirements for data replication, and provides visibility into the replication process with new Amazon CloudWatch Metrics.
S3 Replication Time Control is designed to replicate 99.99% of objects within 15 minutes after upload, with the majority of those new objects replicated in seconds. More about S3 RTC.
>> Setting expectations is essential. Especially in a DR scenario.
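RTC is enabled per replication rule; a hedged JSON sketch of the relevant fields (the rest of the rule, bucket, and role ARNs omitted):

```json
{
  "ReplicationTime": { "Status": "Enabled", "Time": { "Minutes": 15 } },
  "Metrics": { "Status": "Enabled", "EventThreshold": { "Minutes": 15 } }
}
```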